The most useful thing about your child’s neuropsychological evaluation isn’t the diagnosis at the end — it’s the map it gives us as teachers. This guide explains what’s actually in that map, and how the right educator uses it to help your child learn.
The evaluation is not a label. It’s a teaching tool.
If you’ve just received your child’s neuropsychological evaluation, you’re probably holding a 30-page document that feels, at first, like a verdict. A list of diagnoses. A stack of percentile scores. Recommendations in clinical language. Parents often describe the experience of first reading the report as receiving bad news — even when the report is mostly affirming, the volume of technical information makes it feel heavy.
We want to offer a different way of thinking about it.
In more than twenty years of tutoring NYC students, we’ve read a lot of neuropsychological evaluations. Here’s what those evaluations actually are, when they’re done well: they are the most specific, detailed teaching guide a parent will ever receive about their child. They tell an informed teacher, in extraordinary detail, how this particular child learns — what channels of input work, what channels don’t, where working memory gives out, how fast information can be processed, whether verbal or visual pathways are stronger, whether attention is steady or scattered, how stress affects performance.
A label tells you what category your child fits into. A teaching tool tells you how to teach them.
The diagnosis — “ADHD,” “dyslexia,” “anxiety,” “specific learning disorder with impairment in reading” — is a small portion of the value of a neuropsych evaluation. The real value is the cognitive profile underneath the diagnosis. Two kids with the same diagnosis can need radically different teaching approaches. Two kids without any diagnosis can still have profiles that call for specific instructional strategies. The diagnosis is a summary. The profile is the roadmap.
This is how good teachers use these reports:
- A tutor preparing to work with a new student reads the neuropsych report before the first session. They’re looking for the cognitive map — what’s strong, what’s weak, what modalities work, what to avoid.
- They design the instructional approach around the student’s actual profile rather than a generic curriculum. A student whose verbal reasoning is strong but whose working memory is weak gets a different lesson structure than a student whose verbal reasoning is weak but whose visual-spatial reasoning is exceptional.
- They use the report to predict where the student will struggle before they struggle. If processing speed is at the 5th percentile, they know the student will need extended time on timed work, will fatigue on long passages, and will require visual supports to hold information.
- They use it to calibrate expectations. A student who’s in the 92nd percentile for fluid reasoning but the 15th percentile for spelling is not being lazy about spelling. They have a specific wiring that makes spelling hard while their reasoning is unaffected.
- They use it to select the right materials, the right pacing, the right amount of repetition.
- They use it to protect the student from the wrong kinds of interventions — the ones that target the wrong thing, or that inadvertently highlight weaknesses while ignoring strengths.
This is an enormous difference from what happens when a teacher doesn’t have the report. Without it, a teacher is working from general classroom observation and their own pattern-matching. With it, they have specific data about how this child’s particular brain works.
The frame we want parents to hold while reading this guide, and while reading the actual evaluation, is: this document tells us how to teach your child. We are looking for instructional implications, not verdicts.
What the diagnosis actually does (and doesn’t do)
A quick word on diagnoses, because parents often fixate on them.
Diagnoses serve specific practical functions:
- They open legal doors — access to an IEP under IDEA, to 504 accommodations, to standardized test accommodations, to insurance coverage under laws like NY’s Dyslexia Diagnosis Access Act.
- They give parents and children a name for what they’re experiencing, which is often relieving.
- They connect families to communities of other families dealing with similar profiles.
- They guide broad categories of intervention — “your child needs structured literacy” is a decision that flows from a dyslexia diagnosis.
What diagnoses don’t do, and shouldn’t be asked to do:
- They don’t define your child. Your child is a full person with interests, relationships, ambitions, and strengths that no three-letter code captures.
- They don’t predict outcomes. Plenty of kids with dyslexia become accomplished writers. Plenty of kids with ADHD become focused professionals. Plenty of kids with anxiety become resilient adults.
- They don’t tell a teacher what to do in a specific session. That’s what the cognitive profile does.
- They don’t stay fixed. Brains change. Interventions work. Presentations shift. A diagnosis at age 8 is a snapshot, not a permanent record.
The diagnosis is the headline. The cognitive profile is the story. You and your child’s teachers need the story.
What’s actually in the evaluation, and why teachers care about it
The rest of this guide walks through the content of a neuropsych report — what the scores mean, what each common test measures, and how patterns tie together. But we want you reading it through the teaching-implications lens. Every section ends with a “what this means for instruction” note, because that’s the lens that matters.
Part 1: The anatomy of an evaluation report
A well-written neuropsych report has a predictable structure. The teaching-relevant information is spread across several sections, and it pays to know where to look.
Background / Reason for Referral
This section summarizes why the evaluation was requested — the concerns parents and teachers raised — plus the child’s developmental, medical, and school history. It should accurately reflect the questions the evaluation is trying to answer.
Why teachers read it: to understand the child’s trajectory. A child with a late-talking history and a family history of dyslexia is a different teaching context than a child whose reading issues emerged suddenly in fourth grade. The backstory shapes the approach.
Tests Administered
A list of every assessment given. This section tells you whether the evaluation was comprehensive. For a reading concern, we want to see a cognitive measure (WISC-V or equivalent), an achievement measure (WIAT-4 or equivalent), a phonological processing measure (CTOPP-2), fluency and rapid naming measures, and attention/executive function screening. A bare-bones list — cognitive + achievement only — is thin for most referral questions.
Why teachers read it: the battery dictates what questions the report can actually answer. We need to know what the evaluator looked at and what they didn’t.
Behavioral Observations
The evaluator’s notes on how the child presented during testing — attention, persistence, how they handled difficult items, mood, any factors that may have qualified performance.
Why teachers read it: this section is gold for instructional design. “She worked quickly but made errors when she stopped double-checking.” “He became frustrated quickly on tasks requiring reading, and worked more enthusiastically on visual puzzles.” “She rushed through timed tasks and performed better when told she had as much time as she needed.” Each of these is a direct instruction cue. The best teachers use behavioral observations at least as much as they use the scores.
Test Results
The scores. Usually the longest section and the one parents find most opaque. We’ll decode the scoring systems next.
Why teachers read it: this is where the cognitive map lives. Scores tell us what cognitive channels work well, which ones don’t, and how to design around that profile.
Summary / Interpretation / Diagnostic Impressions
Where the evaluator synthesizes findings into a picture and assigns diagnoses. This is the section to read slowly and multiple times.
Diagnoses are typically given using codes from the DSM-5-TR or ICD-10. Examples:
- F81.0 — Specific Learning Disorder with Impairment in Reading (dyslexia)
- F81.1 — Specific Learning Disorder with Impairment in Written Expression (dysgraphia)
- F81.2 — Specific Learning Disorder with Impairment in Mathematics (dyscalculia)
- F90.0/F90.1/F90.2 — ADHD (inattentive, hyperactive-impulsive, or combined presentation)
- F84.0 — Autism Spectrum Disorder
- F41.1 — Generalized Anxiety Disorder
A diagnosis without a specific code and a severity rating (mild, moderate, severe) is a weak diagnosis. Push for specificity.
Why teachers read it: diagnoses matter for accessing services. But we’re often more interested in the narrative interpretation than the DSM code — the part that explains how the pieces fit together for this specific child.
Recommendations
The evaluator’s specific guidance on interventions, accommodations, and services.
Why teachers read it: this is where the evaluation translates into an instructional plan. But be aware that recommendations vary enormously in quality. A great report gives operationalized guidance: “Twice-weekly one-on-one instruction in a structured literacy program (Orton-Gillingham, Wilson Reading System, or IMSE) with a credentialed practitioner for a minimum of 60 minutes per session, sustained over 18-24 months.” A weak report says: “Continue to support reading at home.” If the recommendations in your report are vague, call the evaluator, or bring them to a skilled tutor who can help translate.
Part 2: Understanding the scoring systems
Three scoring systems show up in most neuropsych reports. Getting comfortable with them is the key to reading the rest of the document.
Standard scores
The most common format for composite scores (IQ indexes, achievement composites). Scaled so that:
- Mean: 100
- Standard deviation: 15
A score of 100 is exactly average. Scores between 85 and 115 are within one standard deviation of average — the middle 68% of the population. Scores between 70 and 130 capture the middle 95%. Below 70 suggests significant impairment; above 130 suggests superior ability.
General descriptive ranges:
| Standard Score | Descriptor | Percentile |
|---|---|---|
| 130+ | Very High / Gifted | 98+ |
| 120–129 | High / Superior | 91–97 |
| 110–119 | High Average | 75–90 |
| 90–109 | Average | 25–74 |
| 80–89 | Low Average | 9–23 |
| 70–79 | Borderline / Well Below Average | 2–8 |
| Below 70 | Extremely Low / Impaired | Below 2 |
Scaled scores
Used for individual subtests within larger batteries (like the subtests that make up the WISC-V). Different metric:
- Mean: 10
- Standard deviation: 3
So a subtest scaled score of 10 is average, 8–12 is the average range, 7 is one SD below, and 13 is one SD above. When you see “Block Design: 9, Matrix Reasoning: 12, Digit Span: 6” in a report, those are scaled subtest scores. The pattern of strengths and weaknesses across subtests is often more informative than any single score.
Percentile ranks
The percentile tells you where your child ranks compared to same-age peers. The 50th percentile means performing as well as or better than 50% of peers — exactly average.
Key benchmarks:
- 50th percentile = average (standard score 100)
- 16th percentile ≈ one SD below average (standard score 85)
- 2nd percentile ≈ two SDs below average (standard score 70)
- 84th percentile ≈ one SD above average (standard score 115)
- 98th percentile ≈ two SDs above average (standard score 130)
Percentiles are often the most intuitive format. “My child is at the 5th percentile for phonological awareness” is a clear picture: 95% of same-age kids do this better. That’s a significant clinical finding — and a direct instructional signal.
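The relationship between these scoring systems is just the normal curve. For readers who like to see the conversion spelled out, here is a minimal sketch — it assumes scores are normally distributed with the stated means and SDs, which is how these tests are scaled; the publisher's actual norm tables can differ slightly at the extremes.

```python
from math import erf, sqrt

def percentile(score, mean=100.0, sd=15.0):
    """Approximate percentile rank for a normally distributed score.
    Defaults match the standard-score metric (mean 100, SD 15);
    pass mean=10, sd=3 for subtest scaled scores."""
    z = (score - mean) / sd               # distance from the mean, in SDs
    return 100.0 * 0.5 * (1.0 + erf(z / sqrt(2.0)))  # normal CDF

print(round(percentile(85)))               # ~16 (one SD below average)
print(round(percentile(130)))              # ~98 (two SDs above average)
print(round(percentile(6, mean=10, sd=3)))  # scaled score 6 -> ~9th percentile
```

The same function covers both metrics because they describe the same curve, just rescaled — which is why a standard score of 85 and a scaled score of 7 mean the same thing.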
Confidence intervals
Every score has measurement error. Good reports include a confidence interval — a range within which the true score likely falls. “Verbal Comprehension Index: 118 (95% CI: 111–123)” means the evaluator is 95% confident the true score is somewhere between 111 and 123. Single numbers are never the whole truth; ranges are more honest.
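Where does a range like 111–123 come from? Roughly, from the test's reliability. The sketch below shows the classic textbook calculation — observed score plus or minus 1.96 standard errors of measurement. Note the hedges: the reliability value is illustrative (not from any test manual), and publishers typically use a refined estimated-true-score method, so the intervals printed in a real report will be a little narrower and not perfectly symmetric.

```python
from math import sqrt

def confidence_interval(score, reliability, sd=15.0, z=1.96):
    """Rough 95% confidence interval around an observed standard score.
    SEM = SD * sqrt(1 - reliability). Real reports use the publisher's
    estimated-true-score method, so their intervals differ slightly."""
    sem = sd * sqrt(1.0 - reliability)    # standard error of measurement
    return (score - z * sem, score + z * sem)

# reliability=0.92 is a made-up illustrative value, not a published figure
low, high = confidence_interval(118, reliability=0.92)
print(f"{low:.0f}-{high:.0f}")
```

The takeaway for parents is the direction of the logic: the less reliable the measure, the wider the honest range around any single number.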
Grade equivalents and age equivalents
You may see “Reading grade equivalent: 2.3,” meaning “performance at the level of an average second-grader, third month.” These sound intuitive, but the American Educational Research Association specifically warns against over-interpreting grade equivalents because they rely on extrapolation and can exaggerate small differences. Standard scores and percentiles are more reliable.
The meaningful-gap rule
A practical rule: a difference of 1.0 to 1.5 standard deviations (15–22 points) between two composite scores is clinically meaningful. A child with a Verbal Comprehension Index of 125 and a Processing Speed Index of 82 has a 43-point gap. That’s nearly three standard deviations. That gap is the clinical finding — strong verbal reasoning with much slower processing. And critically, it’s the teaching finding: this child will need different supports for verbal tasks than for time-constrained tasks. Their classroom experience is different from a kid who’s average on both.
This is why the Full Scale IQ is often less meaningful than the individual indexes. An FSIQ of 103 hides wildly different profiles. A kid who’s average on everything and a kid who’s 125 in some areas and 82 in others both land at FSIQ 103 — but they need completely different teaching.
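The gap arithmetic above is simple enough to write down. This sketch applies the rule of thumb from the text — a gap of at least one SD (15 points) between composites is worth attention — using the 125/82 example; formal significance testing uses the publisher's critical-value tables, so treat this as a screening heuristic, not a clinical judgment.

```python
def gap_in_sd(score_a, score_b, sd=15.0):
    """Size of the gap between two composite scores, in SD units."""
    return abs(score_a - score_b) / sd

def is_meaningful(score_a, score_b, threshold_sd=1.0):
    """Flags gaps of ~1 SD or more, per the rule of thumb in the text.
    Formal testing uses the publisher's critical values instead."""
    return gap_in_sd(score_a, score_b) >= threshold_sd

print(gap_in_sd(125, 82))      # ~2.87 SDs -- the 43-point example above
print(is_meaningful(125, 82))  # True: well past the 1-SD threshold
print(is_meaningful(100, 105)) # False: a 5-point gap is noise
```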
Part 3: The WISC-V decoded, through a teaching lens
The Wechsler Intelligence Scale for Children, Fifth Edition (WISC-V) is the most commonly administered cognitive test for ages 6–16. Almost every neuropsych report includes it. It breaks into five Primary Index scores plus a Full Scale IQ.
Verbal Comprehension Index (VCI)
Measures verbal reasoning, language comprehension, and crystallized knowledge — what the child has learned through language exposure.
Core subtests: Similarities (“How are an apple and a banana alike?”) and Vocabulary.
What this means for teaching: A strong VCI tells us a student can absorb information through spoken or written language. They benefit from discussion, from explanations, from being taught “why” before “how.” A weak VCI — particularly when other indexes are stronger — signals that we should lean less on pure verbal instruction and build more visual, kinesthetic, and hands-on pathways into the lesson. We should also expect that new vocabulary will need explicit, multi-exposure teaching.
The VCI is notably sensitive to parental education level and language exposure. A VCI score should be contextualized with family history.
Visual Spatial Index (VSI)
Measures the ability to perceive, analyze, and mentally manipulate visual information.
Core subtests: Block Design (reproducing patterns with physical blocks) and Visual Puzzles.
What this means for teaching: Strong VSI is a gift we should lean into. Students with strong visual-spatial reasoning benefit from diagrams, flowcharts, timelines, mind maps, and geometric representations of abstract ideas. Math often clicks for these kids when it’s presented visually. Writing instruction benefits from visual organizers. Weak VSI means we should be careful with how we use diagrams — not because they’re bad, but because they may not provide the same scaffolding we assume they do. These kids may need more verbal explanation of what a chart is showing.
Fluid Reasoning Index (FRI)
Measures the ability to solve novel problems using logical inference.
Core subtests: Matrix Reasoning (identifying the missing element in a visual pattern) and Figure Weights (quantitative reasoning with a balance).
What this means for teaching: Strong FRI kids thrive on inductive discovery — “here are some examples, what’s the pattern?” They can often infer rules we haven’t explicitly taught. Weak FRI kids need us to teach rules directly, with plenty of examples, rather than assuming they’ll work it out. This has real implications for math instruction especially: strong FRI kids do well with problem-based learning; weak FRI kids need more explicit procedural instruction before they’re ready to generalize.
Working Memory Index (WMI)
Measures the ability to hold information in mind and manipulate it mentally.
Core subtests: Digit Span (repeating number sequences in various orders) and Picture Span (remembering picture sequences).
What this means for teaching: This is one of the most consequential indexes for instructional design. Weak working memory shows up in nearly every academic activity: following multi-step directions, reading comprehension (you have to hold the beginning of the sentence while processing the end), mental math, note-taking, writing (holding a thought while executing the motor act of writing), and test-taking. Students with weak working memory are often labeled “careless” or “inconsistent” when the actual issue is that they can’t hold all the relevant information in mind simultaneously.
Teaching adaptations for weak working memory: write multi-step directions down rather than giving them orally; teach one step at a time; use external scaffolding (checklists, graphic organizers, sticky notes); reduce the amount of material on a single page; break long tasks into discrete chunks with visible progress markers; teach explicit memory strategies.
Processing Speed Index (PSI)
Measures how quickly and accurately the child can process simple visual information under time pressure.
Core subtests: Coding (associating symbols with numbers and copying them quickly) and Symbol Search (scanning for targets among distractors).
What this means for teaching: Processing speed is the sleeper index. Most parents don’t know what it is — but it affects everything. A kid with weak processing speed may be as smart as peers but unable to finish timed work, keep up with classroom pace, or complete in-class assignments at the expected rate. They’re not being lazy. Their brain just processes basic information more slowly.
Teaching adaptations: build in extra time; prioritize quality over quantity on assignments; reduce volume (same rigor, fewer problems); avoid competitive speed-based activities that reinforce the deficit; teach the student to monitor their own pace and use strategies (not finishing every problem, marking and returning, pacing within sections). Test accommodations — extended time — are often appropriate.
Full Scale IQ (FSIQ)
The composite of all five indexes. Meaningful only when the underlying indexes are relatively similar. When there’s a significant gap, the FSIQ may not represent the child’s overall ability and good reports say so explicitly.
The General Ability Index (GAI)
Many reports also include the GAI, a composite built from the verbal comprehension, visual spatial, and fluid reasoning subtests — in effect, the FSIQ with working memory and processing speed removed. GAI is often used when:
- Working memory or processing speed weaknesses are suppressing the FSIQ
- There’s a learning disability question (LD is often defined as a gap between reasoning ability and academic achievement)
- Giftedness is being considered
If FSIQ is 108 but GAI is 128, your child’s reasoning is in the superior range — the FSIQ was held down by specific weaknesses. That’s a critical distinction, and a teaching signal: we should not underestimate this child’s thinking, but we should explicitly support the working memory and processing speed domains.
Part 4: Achievement testing — the gap that matters
Achievement tests measure what the child has actually learned. The key teaching question these tests answer: is there a gap between what this child can reason and what they’ve been able to learn?
Common achievement batteries
WIAT-4 (Wechsler Individual Achievement Test, 4th Edition) — commonly paired with WISC-V. Covers reading (word reading, pseudoword decoding, comprehension, fluency), writing (spelling, sentence composition, essay composition), math (problem solving, numerical operations, math fluency), and oral language.
WJ-IV (Woodcock-Johnson IV Tests of Achievement) — similar scope, different theoretical frame. Common in school psychologist reports.
KTEA-3 (Kaufman Test of Educational Achievement) — another comprehensive achievement test with its own subtest structure.
What teachers look for
The reasoning-achievement gap. A child with a GAI of 125 and a reading comprehension score of 88 has a significant achievement gap. That gap often drives the learning disability diagnosis. It’s also the teaching insight: this child can think at a much higher level than their reading skills currently allow them to demonstrate. They need instruction that builds the reading skills while keeping the reasoning engaged.
Patterns within subjects. A reading composite of 90 hides very different profiles. Decoding 75 with comprehension 105 suggests dyslexia (the child understands what they read aloud or hear, but can’t yet decode fluently). Decoding 100 with comprehension 80 suggests a different issue — often language-based. These two profiles call for completely different instructional approaches.
Fluency versus accuracy. Many dyslexic students are accurate but slow. Fluency scores often tell a different story than accuracy scores. For dyslexic kids, fluency is frequently the bigger teaching target.
Strengths to build from. Achievement testing reveals not just gaps but also genuine academic strengths. A child who’s below average in reading but exceptional in math is a specific teaching context. We don’t only remediate weaknesses; we also fuel strengths, because a child who knows they’re capable in some domain tolerates the difficulty of remediation in another.
Part 5: Domain-specific tests and what they reveal
For dyslexia and reading
- CTOPP-2 (Comprehensive Test of Phonological Processing, 2nd Ed.) — the standard measure of phonological processing. Low scores on Phonological Awareness and Rapid Symbolic Naming composites are the hallmark of dyslexia. For a teacher, CTOPP-2 results drive the choice of literacy program — they tell us which phonological skills to target and in what order.
- GORT-5 (Gray Oral Reading Test) — measures rate, accuracy, fluency, and comprehension of connected text. Provides a picture of actual oral reading that closely mirrors classroom concerns.
- TOWRE-2 (Test of Word Reading Efficiency) — timed word and pseudoword reading; the cleanest measure of reading fluency in isolation.
- TILLS (Test of Integrated Language and Literacy Skills) — sometimes used for broader language assessment, particularly when language-based issues are suspected alongside reading.
Teaching implication: these tests tell us exactly which subskills of reading are breaking down, which lets us target the specific deficits rather than teaching generic “reading.” A student who can decode but can’t read fluently needs a different plan than one who can’t decode at all.
For ADHD and executive function
There is no single test that diagnoses ADHD. Good evaluations integrate multiple data sources:
- Parent and teacher rating scales — Conners-3, BASC-3, or Vanderbilt. Critical because ADHD diagnosis requires evidence of impairment in multiple settings.
- BRIEF-2 (Behavior Rating Inventory of Executive Function) — measures executive function weaknesses in real-world settings.
- Continuous Performance Tests like the Conners CPT-3 — computerized tasks measuring sustained attention. Useful but not definitive.
- WISC-V patterns — weak WMI and PSI are common in ADHD but not diagnostic on their own.
Teaching implication: the BRIEF-2 subscales are especially useful for classroom planning. Weakness in “Initiation” (starting tasks) calls for specific strategies different from weakness in “Working Memory” or “Emotional Control.” Good executive function coaching reads the BRIEF-2 and targets the specific weaknesses, rather than applying generic “organizational skills” curriculum.
For autism spectrum
- ADOS-2 (Autism Diagnostic Observation Schedule) — the gold-standard direct assessment, administered by a trained clinician.
- ADI-R — a structured parent interview about developmental history.
- SRS-2 (Social Responsiveness Scale) — parent and teacher rating scale.
- CARS-2 — clinician rating scale based on observation and interview.
An autism diagnosis requires clinical judgment integrating all of these — not a single test.
Teaching implication: autism is a broad spectrum, and the cognitive profile often matters more than the diagnosis. We look at how the specific child processes social information, how they manage sensory input, what their interests are (often a strength to leverage), and what communication supports work for them.
For anxiety, depression, and emotional concerns
- BASC-3 — broad behavioral rating scale, parent/teacher/self-report
- MASC-2 (Multidimensional Anxiety Scale for Children) — child and parent forms
- CDI-2 (Children’s Depression Inventory) — child self-report
Teaching implication: emotional factors affect academic performance in measurable ways. A student with significant test anxiety may have lower PSI scores than their “real” processing speed — because anxiety suppresses speed under timed conditions. A tutor reading the report needs to know this so they don’t misattribute anxiety-driven performance to cognitive weakness.
Part 6: Reading common profiles
Once you understand individual tests, the next skill is recognizing patterns. Here’s what different profiles typically look like, and what each implies for teaching.
Classic dyslexia profile
- Strong VCI and/or FRI — reasoning is good
- Relatively weaker WMI
- Low CTOPP-2 scores in Phonological Awareness and/or Rapid Symbolic Naming
- Low word reading and pseudoword decoding
- Reading comprehension suppressed by decoding difficulty — comprehension often rises dramatically when the student is read to or given accommodations
- Spelling well below expected level given verbal ability
- Math often age-appropriate unless word problems are involved
What this means for teaching: the student needs structured literacy — Orton-Gillingham or similar — targeted at the phonological weakness. But equally important, the student’s reasoning ability is intact, so we don’t dumb down content. We provide the decoding scaffolding while engaging the student’s full intellectual capacity. Audiobooks for age-appropriate content. Text-to-speech for school assignments. Spelling accommodations that don’t penalize them while they’re still learning. These aren’t crutches — they’re the bridge between where the decoding is and where the thinking already is.
ADHD profile
- Highly variable across tasks — scatter is the classic finding
- WMI and PSI often weaker than VCI and FRI (though not always)
- Elevated scores on Conners-3 or BASC-3 attention subscales from both parents and teachers
- BRIEF-2 shows elevated executive function concerns
- Achievement is often “underperformance relative to ability”
What this means for teaching: short focused work blocks, physical movement between blocks, immediate feedback, visual timers, external scaffolding for planning and organization. Pair instruction with structure the child can’t yet provide internally. Teach executive functioning skills explicitly — planning, time estimation, organization, self-monitoring — rather than expecting them to develop on their own. Use the child’s interests as motivational fuel.
Twice-exceptional (2E) profile
- Some indexes in superior or gifted range (often VCI, FRI)
- Some indexes or achievement areas significantly below average
- Full Scale IQ often looks “average” because high and low scores cancel
- The gap is the clinical finding
What this means for teaching: simultaneously gifted and learning-disabled students need simultaneous enrichment and remediation. A tutor working with a 2E kid cannot teach only the deficit — the student will disengage because they’re bored. Cannot teach only the strength — the gap widens. The right approach: enrichment-level intellectual content delivered through remediation-level structural support. This is hard. It’s also where the right tutor can make an enormous difference.
Language-based learning difference (non-dyslexic)
- VCI may be lower than other indexes
- Reading comprehension weaker than decoding (opposite of classic dyslexia)
- Listening comprehension also weak
- Formal language testing (CELF-5) shows concerns
What this means for teaching: vocabulary needs to be taught explicitly and repeatedly. Complex sentence structures need scaffolding. Writing instruction benefits from structure (templates, graphic organizers, explicit sentence frames). Often benefits from speech-language therapy as a complement to academic work.
Math disability (dyscalculia)
- Math scores significantly below reasoning ability
- Reading often intact (though some kids have both)
- VSI and/or FRI may be weak
- Working memory often weak, affecting multi-step calculation
What this means for teaching: concrete manipulatives before abstract symbols. Explicit number-sense work. Calculator accommodations for fact recall while conceptual understanding is being built. Visual representations of quantity and operation. Slower pacing through foundational concepts.
Anxiety’s footprint
- Performance inconsistent with apparent ability — the child “freezes” on some tasks
- PSI may be artificially depressed under timed conditions
- Behavioral observations note rumination, perfectionism, or avoidance
- Elevated scores on anxiety measures
What this means for teaching: the anxiety often needs to be addressed alongside the academic work. A tutor who helps a student build genuine competence in a subject can significantly reduce their anxiety about it. A therapist may also be needed. Extended time on tests reduces the anxiety that feeds the poor performance. Our test anxiety guide goes into this in depth.
A warning about profile-matching
Real profiles are messy. A child can have dyslexia and ADHD and anxiety. The purpose of knowing these profiles isn’t to diagnose your child yourself — it’s to read your child’s report with recognition and to understand what the evaluator has concluded. The diagnostic synthesis is the neuropsychologist’s job; pattern recognition in a report is the parent’s and teacher’s job.
Part 7: Evaluating the quality of the report itself
Not all neuropsych evaluations are equal. Here’s how to tell if yours was well done.
Signs of a quality evaluation
- Comprehensive test battery. For a learning concern, you should see cognitive + achievement + specific domain testing + attention/executive function screening.
- Multi-informant data. Parent and teacher rating scales provide cross-setting information that single test sessions can’t.
- Specific, thoughtful behavioral observations. Not “child was cooperative” but actual notes on how the child approached challenges.
- Interpretation that integrates data, not just lists it. A good report connects findings to the diagnostic conclusion with a clear narrative.
- Limitations acknowledged. Bad testing day, incomplete subtests, cultural or linguistic factors — good clinicians disclose these.
- Specific, actionable recommendations. Not “continue to support reading” but operationalized interventions with frequency, duration, and methodology specified.
- Evaluator available for follow-up. Good neuropsychologists expect feedback sessions, school meetings, and ongoing consultation.
Red flags
- Very short report (a thorough evaluation typically produces 20–40 pages)
- Diagnoses without DSM-5-TR codes or severity levels
- Generic, cut-and-paste recommendations that don’t reflect this specific child
- ADHD “diagnosed” from rating scales alone, without history or observation
- Rushed or absent feedback session
- Conclusions that don’t match the data in the body of the report
- Missing tests that the referral question requires
Part 8: Using the report with schools, tutors, and other providers
Read it multiple times
First read: cover to cover, at your pace. Expect not to understand everything. Second read: focus on the scores, with this guide alongside. Third read: focus on the recommendations and think through actionable next steps.
Have the feedback session — and bring questions
The evaluator schedules a feedback session to walk you through the report. Come with specific questions. The best ones to ask:
- What’s the single most important thing you’d want my child’s teachers to know?
- What should I watch for in the classroom given this profile?
- What’s the #1 intervention you’d prioritize if we could only do one thing?
- What should I say to the school?
- Are there tutors, therapists, or programs in NYC you’d specifically recommend?
- What does progress look like in 6 months? In 2 years?
- What should trigger us to come back for re-evaluation?
Share with the school strategically
For an NYC public school CSE/IEP meeting:
- Provide the report to the school before the meeting (at least 48 hours in advance)
- Highlight the recommendations section — these are the basis for services you’ll request
- Request specific services and accommodations tied to the recommendations
- Schools can disagree with a private evaluation, but under IDEA they are required to consider it; parents have procedural rights in this process
For a 504 plan: less formal process; school 504 coordinators review the report and propose accommodations. Come with your proposed accommodations list.
For standardized test accommodations (SAT, ACT, SHSAT, ISEE, SSAT): each testing organization has its own process; most require documentation plus evidence that your child already uses the accommodations at school; apply early (allow months of lead time).
Share with tutors and other providers
This is where the teaching-tool framing really matters. A skilled tutor reads the report for:
- The cognitive profile (to calibrate approach)
- Achievement gaps (to prioritize what to work on first)
- Domain-specific findings (to select curriculum — which OG program, which math approach)
- Behavioral observations (to anticipate what will and won’t work)
- Recommendations (as starting hypotheses to test in practice)
When you hire a tutor, share the full report. A tutor who won’t engage with a neuropsych report is a tutor you should be wary of. A tutor who reads it carefully and starts the first session already calibrated to your child’s profile is the tutor you want.
Build a coordinated care team
Reports often indicate multiple providers: tutor, therapist, OT, speech-language pathologist. Think of them as a team. Providers who communicate with each other produce better outcomes than providers working in silos. Ask each how they’ll coordinate with the others. Consider serving as the communication conduit yourself if necessary.
Keep the report current
Neuropsych reports are generally current for 2–3 years for most purposes. Re-evaluation is worth considering at transitions (middle school, high school, college) or when presentation shifts significantly. Some testing bodies require reports within specific time windows for accommodations.
Part 9: When to get a second opinion
Sometimes a second opinion is warranted:
- The report’s conclusions don’t match what you and teachers observe daily
- Diagnoses feel wrong, incomplete, or missing
- Recommendations are generic or unhelpful
- You suspect a profile (2E, autism, anxiety) the evaluator didn’t fully explore
- The school is contesting the evaluation and you want corroboration
- The evaluator won’t meet with you or the school after delivery
- Significant time has passed and the child’s presentation has shifted
A second opinion from a different subspecialty lens (pediatric dyslexia specialist, ADHD specialist, autism specialist) often surfaces profiles the first evaluator missed — especially common in 2E kids and subtle presentations.
NYC public school families have a specific legal right: if you disagree with the DOE’s evaluation, you can request an Independent Educational Evaluation (IEE) at public expense.
Part 10: Finding a quality neuropsychologist in NYC
Credentials to look for
- PhD or PsyD in clinical, school, or educational psychology
- New York State licensure
- Board certification in clinical neuropsychology (ABPP/ABCN or ABPdN) — not required but a meaningful signal
- Specialization in pediatric neuropsychology
- Experience with your specific concern (dyslexia, ADHD, autism, 2E)
Settings where good NYC neuropsychs work
- Hospital-affiliated centers: NYU Child Study Center, Columbia University Medical Center, Mount Sinai, Weill Cornell. Longer waits, institutional quality control.
- Private practice: large ecosystem of solo and small-group practitioners. Quality varies.
- Specialized centers: Child Mind Institute, Churchill Center, and other learning-specialist practices offer evaluations with specific subspecialty focus.
Finding referrals
- Your child’s pediatrician — pediatricians increasingly have trusted networks
- Your child’s school — learning specialists know who does quality work
- A trusted tutor or educational consultant — tutors read many reports and know which evaluators produce actionable ones
- Parent networks: Parents League of New York, school parent associations, local groups
- Organizational directories: American Academy of Clinical Neuropsychology, New York State Psychological Association, Decoding Dyslexia NY
Cost and insurance
NYC private neuropsych evaluations typically run $4,000–$8,000. As covered in our dyslexia guide, New York State’s Dyslexia Diagnosis Access Act — effective January 2025 — requires private insurance to cover neuropsych testing for dyslexia when medically indicated. If your referral question centers on dyslexia, start by asking your insurance about coverage before paying out of pocket.
For other conditions, coverage varies. Ask the evaluator’s office about billing and insurance support — this is a real differentiator among practices.
A final word on the right frame
We started this guide with a claim: the evaluation is a teaching tool, not a label. We want to end with it.
The parents who get the most out of a neuropsych evaluation are the ones who hold this frame. They don’t get stuck on the diagnostic categories. They don’t treat the report as a verdict. They use it the way a good tutor uses it: as the most detailed, specific, useful guide to their child’s particular mind that they’ll ever receive.
The cognitive profile doesn’t change who your child is. It reveals how they learn. That information, in the hands of a teacher who knows how to use it, is transformative.
Central Park Tutors can help
Our tutors regularly work with students whose families have been through the neuropsychological evaluation process. When a family shares a report with us, we read it carefully and come to the first session calibrated to the student’s profile — strengths to build from, weaknesses to scaffold, specific interventions the research supports, behavioral observations to watch for.
Our literacy specialists are trained in Orton-Gillingham and read CTOPP-2 and WIAT-4 findings with fluency. Our executive function coaches work directly with BRIEF-2 profiles.
If you’ve just received an evaluation and aren’t sure what to do next, we can help you think through what the report actually means for instruction, and match your child with the tutor whose approach best fits their profile.
Contact us to discuss your child’s situation.
Central Park Tutors has been helping NYC families with academic support and learning-difference specialization for more than twenty years. Recommended by The New York Times. This article is educational and doesn’t substitute for the clinical judgment of the neuropsychologist who wrote your child’s report. If you have questions about your specific child’s evaluation, contact the evaluator directly. Get in touch with us if we can help.