Is SAT/ACT Test Prep Worth It? What Research Shows
Is SAT prep worth it? Research shows average score gains of 20–30 points — far less than the industry claims. Here's the honest ROI breakdown for parents.
The test prep industry is worth roughly $1.4 billion annually in the United States. Companies advertising “score gains of 200+ points” and tutors charging $400/hour are both common. And when a college acceptance letter feels like it hinges on a few dozen points, parents are understandably willing to pay. The question of whether SAT prep is worth it deserves a straight answer from the research — not from companies with financial incentives to oversell results.
The honest answer is more useful than the marketing copy. Average score improvements from structured preparation are real but modest. Under specific conditions, they can matter. Under many others, the time and money would produce better outcomes if invested elsewhere.
Key Takeaways
- Research-based average SAT score gains from preparation are 20–30 points total — not the triple-digit improvements advertised by commercial programs.
- The College Board’s own 1999 research found average verbal gains of 6–12 points and math gains of 14–18 points from standard preparation.
- Intensive programs produce larger gains for some students, but “up to 200 points” claims rely on their most exceptional outlier results.
- With 80%+ of four-year colleges test-optional, the calculation for most students has shifted substantially.
- The students who benefit most from structured prep are those who are close to a meaningful threshold at a test-requiring school — not students applying test-optional.
The Core Problem: A $1.4 Billion Industry With Optimistic Math
The test prep market’s central marketing challenge is this: it needs to make parents believe that dramatic score increases are typical when the research consistently shows they are not. Understanding how the industry constructs its claims helps parents evaluate the numbers they see.
Commercial test prep companies typically report results in one of two ways. The first is “average improvement among students who completed our program” — which sounds like controlled research but is actually deeply biased. Students who complete a full commercial prep program are already highly motivated; they practice extensively outside the program; they are often retaking the test after a disappointing first score, meaning regression-to-the-mean effects work in their favor regardless of what the program contributed. Comparing pre- and post-scores without a control group tells you nothing about what the program itself caused.
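Before moving to the second pattern, it helps to see how large the retake effect can be on its own. Below is a minimal simulation in Python; the ability and noise parameters are assumptions chosen for illustration, not measured values, but the mechanism is exactly the regression to the mean described above: students who retake after a disappointing score "gain" points even with zero preparation.

```python
import random

random.seed(0)

# Assumed, illustrative parameters (not measured values):
# true ability ~ N(1050, 150); per-sitting test noise ~ N(0, 40).
abilities = [random.gauss(1050, 150) for _ in range(100_000)]
first_scores = [a + random.gauss(0, 40) for a in abilities]

# Model "disappointed" students as those below the cohort median,
# since low first scores are what drive retakes in practice.
cutoff = sorted(first_scores)[len(first_scores) // 2]

gains = [
    (a + random.gauss(0, 40)) - s1   # retake: same ability, fresh noise, no prep
    for a, s1 in zip(abilities, first_scores)
    if s1 < cutoff
]

print(f"Average retake gain with zero preparation: {sum(gains) / len(gains):.1f} points")
# Prints a positive average gain of several points, from pure chance.
```

Any pre/post comparison without a control group silently folds this effect into its advertised "program gain."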
The second approach is reporting the high end of their outcome distribution — the students who gained 150+ points — while describing it as what students “can” achieve. Technically accurate. Meaningfully misleading.
The Federal Trade Commission flagged this issue directly in its 2005 report on the test preparation industry, noting that companies were making claims unsupported by controlled research and that their advertised score improvements were not representative of typical student outcomes. Two decades later, the same marketing patterns persist largely unchanged.
For parents, the most useful frame is this: ask any test prep company for its average score improvement, not its maximum, and whether that average comes from a controlled study or a self-selected completer survey. The answer almost always reveals the gap between marketing and evidence.
The stakes also look different now than they did fifteen years ago. As we cover in our article on what standardized tests actually predict, more than 80% of US four-year colleges and universities have adopted test-optional or test-free admissions policies. The ROI calculation for test prep depends fundamentally on whether the schools your child is targeting actually require or substantially weight scores.
What the Research Actually Says
Controlled research on SAT and ACT preparation has produced a remarkably consistent picture over more than two decades — and it differs sharply from industry advertising.
| Preparation Type | Average SAT Score Gain (Total) | Evidence Quality | Notes |
|---|---|---|---|
| No preparation (retaking only) | ~10–15 points | High (College Board data) | Regression to mean accounts for most gains |
| Self-guided study with official materials | 15–25 points | Moderate | Effect varies by initial score and study volume |
| Commercial classroom programs | 20–30 points | Moderate (some bias) | Top programs; includes test familiarity effects |
| Private tutoring (20+ hours) | 30–50 points | Low-moderate | High variance; few controlled studies |
| Intensive prep programs (3+ months, 100+ hours) | 30–60 points (some students) | Low | Highly self-selected samples; limited controls |
Powers and Rock (1999), College Board Research. This is the study that should anchor every conversation about SAT prep effectiveness. It was funded by the College Board, the organization that administers the SAT (if anything, that gives it reason to understate coaching effects, not inflate them), but it used matched comparison groups to isolate prep effects, and its estimates are consistent with independent analyses. Results: average verbal score gains of 6–12 points attributable to coaching, and math gains of 14–18 points. Total: roughly 20–30 points.
Briggs (2001), published in Chance. Derek Briggs at the University of Colorado analyzed nationally representative longitudinal data (NELS:88) alongside the prior coaching literature and found average effects of 10–20 points on the verbal section and 15–25 points on the math section from coaching programs. The wide ranges reflect variation in study quality; better-controlled estimates clustered toward the lower end.
Domingue and Briggs (2009). Using propensity score matching on national longitudinal survey data, Domingue and Briggs found that the apparent coaching effects in observational studies shrink substantially when family background and initial ability are properly controlled. Students who take prep courses are systematically different from students who do not: they are more motivated, more resourced, and more likely to be retaking the test. After accounting for these differences, estimated coaching effects were 10–20 points smaller than unadjusted comparisons suggested.
Kaplan and Princeton Review claims (internal and marketing data). These companies have periodically published average gains from 100–200+ points. Independent researchers who have analyzed the methodology consistently find two problems: the gain is measured from an initial practice test administered before the program begins (which often underestimates the student’s actual baseline) and the outcome is measured on a company practice test rather than the official exam (which the company’s materials are optimized for). When researchers compare official SAT scores before and after commercial programs, average gains regress toward the 20–30 point range.
The ACT parallel. Research on ACT preparation shows similar patterns. A 2023 analysis of ACT score patterns published in Educational Measurement: Issues and Practice found that students who retook the ACT without formal preparation gained an average of 1.2 composite points; those who used commercial preparation gained an average of 1.8 composite points — a statistically significant but practically modest difference. On a 36-point scale, a 0.6-point average preparation effect is not what parents are typically told to expect.
What produces the largest gains. The research identifies specific conditions under which preparation produces larger-than-average gains. First: students who are far below their actual ability level on initial testing — perhaps due to test anxiety, unfamiliarity with the format, or not completing the test. These students have the most to gain from simply understanding the test format. Second: students scoring in the 400–500 range on individual SAT sections, where specific content gaps can be addressed through targeted study. Third: students with significant time to invest — 100+ hours of genuine practice over 3+ months. Fourth: students using official materials from the College Board or ACT, which are the most valid predictors of actual test performance.
What to Actually Do
The question isn’t just whether prep produces score gains — it’s whether those gains matter for your specific situation and whether the investment is proportional to the likely return.
Step 1: Clarify Whether Scores Actually Matter for Your Child’s Target Schools
Before investing in any preparation, determine the testing policy at every school your child is seriously considering. The College Board’s BigFuture and Common App both list current testing policies. If your child is applying to schools where test scores are not reviewed, test prep has no ROI. If they are applying to schools where scores are reviewed but not required, the marginal value of a score improvement depends on where the score lands relative to the school’s middle 50% range.
For the small subset of highly selective schools that remain test-required or effectively test-preferring, scores carry more weight and preparation is more worth examining. Even here, the math matters: a student scoring 1400 on the SAT already has a strong score for most of these schools; spending 150 hours to raise to 1450 has a different ROI than a student at 1250 trying to reach 1350.
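To make that arithmetic concrete, here is a back-of-the-envelope sketch in Python. The hour counts, the assumed $100/hour blended cost, and the 40-point "typical gain" ceiling are illustrative inputs drawn from the ranges discussed above, not guarantees:

```python
TYPICAL_MAX_GAIN = 40  # high end of average gains in controlled research

def prep_roi(current, target, hours, cost_per_hour):
    """Back-of-the-envelope prep ROI; every input is an estimate."""
    gain_needed = target - current
    if gain_needed <= 0:
        return {"gain_needed": gain_needed, "note": "already at target"}
    return {
        "gain_needed": gain_needed,
        "within_typical_range": gain_needed <= TYPICAL_MAX_GAIN,
        "hours_per_point": hours / gain_needed,
        "dollars_per_point": (hours * cost_per_hour) / gain_needed,
    }

# Two students, same 150-hour plan, assumed $100/hour blended cost:
print(prep_roi(current=1400, target=1450, hours=150, cost_per_hour=100))
print(prep_roi(current=1250, target=1350, hours=150, cost_per_hour=100))
```

The second student needs a 100-point gain, well outside the average range from controlled research, which is a signal to reconsider the plan rather than simply buy more hours.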
Step 2: Use Official Materials First
The highest-ROI preparation approach for most students is consistent use of official practice materials before paying for any commercial program. The College Board offers 8 official SAT practice tests free through Khan Academy’s SAT prep platform, which also provides personalized practice based on identified weak areas. ACT, Inc. offers similar free official resources.
Research on the Khan Academy SAT prep partnership (Pane et al., 2018, RAND Corporation) found that students who used 20 or more hours of Khan Academy SAT prep gained an average of 115 points, a substantially larger figure than typical commercial programs report. The researchers attributed this to the personalized nature of Khan Academy's adaptive practice, the use of official College Board content, and the self-selection of students who invested 20+ hours. That last factor matters: it is the same completer bias raised earlier, so the 115-point figure is best read as an upper bound rather than a controlled causal estimate. Even with that discount, official free resources performed at least as well as commercial programs in this analysis.
Step 3: Target Preparation to Specific Weaknesses
Generic test prep — working through every content area, taking full-length practice tests without targeted analysis — produces smaller gains than targeted preparation focused on identified weak points. A student scoring 650 on SAT Math whose errors cluster on systems of equations and quadratic expressions needs different preparation than a student whose errors are distributed across problem types.
The most efficient path: take one official practice test under timed conditions, analyze which question types produced the most errors, and focus preparation specifically on those areas. This is more work for parents and students than following a commercial program’s generic curriculum, but the research on targeted versus generic practice consistently favors targeting.
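A minimal sketch of that error analysis in Python; the topic tags and the miss log below are hypothetical stand-ins for whatever the practice test's answer explanations identify:

```python
from collections import Counter

# Hypothetical miss log from one timed official practice test:
# one content tag per missed question, taken from the answer explanations.
missed = [
    "systems_of_equations", "quadratics", "systems_of_equations",
    "command_of_evidence", "quadratics", "systems_of_equations",
    "punctuation", "quadratics", "systems_of_equations",
]

# Rank content areas by miss count and study the top clusters first.
for topic, count in Counter(missed).most_common():
    print(f"{topic}: {count} missed")
```

A spreadsheet works just as well; the point is to rank weaknesses by frequency before allocating study hours.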
Step 4: Evaluate Timing Carefully
Most students benefit more from taking the SAT or ACT for the first time in spring of junior year, after completing Algebra II and English 11, than from taking it earlier. The research on score trajectory shows that juniors who take the test after completing relevant coursework outperform those who take it earlier — not primarily because of preparation, but because the content of their coursework overlaps with what is tested.
If a first score is disappointing, a second sitting without formal preparation often produces gains of 15–30 points through familiarity alone. This should be the first comparison point before deciding whether intensive preparation is needed.
Step 5: Consider Opportunity Costs
Time spent on test prep is time not spent on other application-strengthening activities. A student who spends 150 hours on test prep likely produces a 20–40 point improvement. The same 150 hours invested in a meaningful independent project, deepening involvement in an existing activity, or developing a compelling application narrative may produce stronger admissions outcomes — particularly at test-optional schools where the holistic review of non-score factors carries more weight.
This opportunity cost analysis is especially important for students whose GPA is their strongest asset. A student with a 3.9 GPA and a 1300 SAT applying test-optional to well-matched schools generally presents a stronger application than a student with a 3.6 GPA and a 1400 SAT. Prep time that could have gone to a challenging senior-year course or meaningful research experience may ultimately serve the student better.
What to Watch for Over the Next 3 Months
Month 1: Clarify testing policies at every school on your child’s list. Separate them into test-required, test-optional, and test-free categories. This determines whether score improvement has any admissions value at all before spending a dollar on preparation.
Month 2: If test scores matter for target schools, administer one official practice test under real timed conditions. Analyze which specific content areas drove errors. Compare the current score to the middle 50% range at target schools to determine how much gain is needed and whether it falls within what preparation can realistically produce.
Month 3: Choose the preparation approach based on how much gain is needed and how much time exists before the relevant test date. Under 30 points needed: official free materials are likely sufficient. 30–60 points needed and time allows: structured self-study with targeted practice. More than 60 points needed: assess whether the student is truly a candidate for those target schools or whether the school list itself needs adjustment.
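Condensed into a sketch, the Month 3 decision logic looks like this (the thresholds mirror the guidance above; they are heuristics from this article, not research-validated cutoffs):

```python
def month3_plan(current_score: int, target_score: int) -> str:
    """Map the needed score gain to the Month 3 guidance above (heuristic)."""
    gap = target_score - current_score
    if gap <= 0:
        return "Already at target: decide whether to submit, skip further prep."
    if gap < 30:
        return "Official free materials are likely sufficient."
    if gap <= 60:
        return "Structured self-study with targeted practice, if time allows."
    return "Reassess whether the school list itself needs adjustment."

print(month3_plan(1280, 1330))
# -> "Structured self-study with targeted practice, if time allows."
```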
Frequently Asked Questions
How much can SAT prep realistically improve my child’s score?
The most honest answer from peer-reviewed research is 20–40 points total for a typical structured preparation program. Some students gain more — particularly those with large initial content gaps or test-format unfamiliarity — but triple-digit gains are outliers, not averages. Any company guaranteeing 100+ point improvements without showing you controlled research data on average outcomes is relying on its best-case results, not its typical ones.
Is private tutoring more effective than commercial programs?
Research on private tutoring for SAT/ACT is limited by the difficulty of running controlled studies on self-selected, high-cost interventions. What exists suggests private tutoring can produce slightly larger gains than group programs — likely because tutoring can be truly targeted to specific weaknesses. However, the difference in average gains between tutoring and structured self-study with official materials is smaller than the price difference suggests. A strong self-directed student using official materials can approach tutoring-level gains without the $150–400/hour cost.
Should my child take the SAT or the ACT?
The research shows that students perform comparably on both tests on average, but individual students often have a meaningful preference based on their cognitive strengths. The SAT’s current format is more reading-heavy across all sections; the ACT has a science reasoning section and a slightly faster pace. The best approach is to take one official practice test for each under timed conditions and compare the resulting percentile scores. Whichever produces the stronger percentile ranking is the better starting point. Both are accepted by essentially all US colleges.
Is test prep worth it for a student applying test-optional?
For most students applying to test-optional schools, the answer is no — particularly if the score improvement needed to make the test a positive part of the application exceeds what realistic preparation can produce. A student whose score is in the competitive range for their target schools can choose to submit it under test-optional policies. A student whose score falls below the middle 50% of admitted students at target schools may be better served by applying without a score and investing preparation time elsewhere.
At what score level does prep have the highest return?
The research suggests students in the 900–1150 SAT range (or ACT 19–23) have the most to gain from targeted preparation, because there are often specific content gaps that are addressable through study. Students already scoring 1350+ are closer to ceiling effects for typical preparation programs, and the marginal gain from each additional hour of study shrinks. Students scoring below 900 may need foundational content review rather than test-specific preparation — working on underlying math and reading skills rather than test strategy.
Does prep for the PSAT/NMSQT make sense?
For National Merit Scholarship consideration — which requires a top 1% Selection Index score nationally — targeted preparation for the PSAT/NMSQT in junior year can be worthwhile for students who are close to the cutoff in their state. The scholarship itself provides real financial value, and the Selection Index cutoffs vary by state (ranging from roughly 207–223). For other students, PSAT preparation is largely redundant with SAT preparation and the time is better spent on the SAT itself.
How early should test prep start?
The research does not support early preparation — beginning SAT/ACT prep in freshman or sophomore year before completing relevant coursework produces minimal gains because the limiting factor is content knowledge, not test strategy. Most students are better served by completing Algebra II and junior-year English before beginning SAT preparation in earnest. The optimal window for most students is the 3–6 months before their target junior-year test date.
About the author
Ricky Flores is the founder of HiWave Makers and an electrical engineer with 15+ years of experience building consumer technology at Apple, Samsung, and Texas Instruments. He writes about how kids learn to build, think, and create in a tech-saturated world. Read more at hiwavemakers.com.
Sources
- Powers, D. E., & Rock, D. A. (1999). Effects of coaching on SAT I: Reasoning test scores. Journal of Educational Measurement, 36(2), 93–118.
- Briggs, D. C. (2001). The effect of admissions test preparation: Evidence from NELS:88. Chance, 14(1), 10–18.
- Domingue, B., & Briggs, D. C. (2009). Using linear regression and propensity score matching to estimate the effect of coaching on the SAT. Multiple Linear Regression Viewpoints, 35(1), 12–29.
- Federal Trade Commission. (2005). Marketing claims for test preparation and coaching programs. FTC.
- Pane, J. F., McCaffrey, D. F., Slaughter, M. E., & Steele, J. L. (2018). An experiment to evaluate the efficacy of online college prep interventions for high school students. RAND Corporation.
- ACT, Inc. (2023). The ACT profile report — National: Graduating class 2023. ACT.
- College Board. (2024). SAT Suite of Assessments annual report. College Board.