Cognitive assessment of students typically involves administering one or more standardized, norm-referenced tests, but it also requires recognition of the importance of multiple data sources; that is, cognitive assessment provides only one source of data to inform classification decisions, such as Specific Learning Disability (SLD), Intellectual Disability, and Gifted and Talented. Information gathered from cognitive assessment includes current levels of functioning in multiple cognitive domains, such as accumulated knowledge, reasoning, working memory, and many others. In addition, cognitive assessment data provide insight into how a student learns and why he or she is struggling academically (e.g., Compton, Fuchs, Fuchs, Lambert, & Hamlett, 2012; Sanders, Berninger, & Abbott, 2018). This information, when integrated with other critical data sources (e.g., behavioral observations of a student’s approach to solving problems and answering questions; information gathered through testing limits, such as whether problems are solved correctly without time constraints; information gathered through interviews with parents, teachers, and the student; educational, social, and medical history), assists in formulating recommendations, instructional strategies, and interventions.
After a review of the literature and consultation with experts, it is LDA’s position that psychometric theory (e.g., Cattell-Horn-Carroll or CHC) and neuropsychological constructs (e.g., learning and memory, speed and efficiency), together with the accompanying research, should be at the center of all cognitive assessment activities because they drive test selection, test interpretation, problem solving, and intervention (e.g., Hale, Chen, Tan, Poon, Fitzer, & Boyd, 2016; Schneider & McGrew, 2018). Best practices in cognitive assessment come from the application of a systematic, comprehensive evaluation and interpretive framework that integrates empirically supported theoretical and psychometric principles and that is nondiscriminatory (Ortiz, 2014). Issues related to measurement, validity, interpretation, and intervention are accommodated within such a framework precisely because theory is applied and is central to the process (e.g., Carroll, 1993; Haier & Jung, 2018).
Use of standardized, norm-referenced cognitive tests may not be necessary for every student who experiences learning difficulties. Instruction and intervention within a response to intervention (RTI) framework may be sufficient for remediation of specific academic skill deficiencies, but not for the identification of SLD. When high-quality instruction and evidence-based intervention are not effective, the question of whether the student’s failure to respond to intervention is a manifestation of SLD remains unanswered. It is the position of LDA that current theory-based cognitive batteries provide useful information that aids in answering this question.
Cognitive tests assist in determining whether a student suspected of having an SLD has a disorder in one or more basic psychological processes, which is a component of the Individuals with Disabilities Education Improvement Act (IDEIA) 2004 definition of SLD and a necessary criterion for the accurate identification of SLD. Once a decision is made to use cognitive tests, the process of assessment is guided by theory through knowledge of the network of validity evidence that exists in support of the structure and nature of abilities and processes within the theory. For example, there is a large body of research on the relationships between cognitive abilities and processes (specified by theory) and specific academic skills (e.g., Johnson, Humphrey, Mellard, Woods, & Swanson, 2010; McDonough, Flanagan, Sy, & Alfonso, 2017; McGrew & Wendling, 2010; Miller & Maricle, 2019). The correspondence between weaknesses in academic skills and related cognitive processes, together with strengths in other cognitive abilities and processes, is a common pattern of performance in students with SLD (Journal of Psychoeducational Assessment, 2016; Learning Disabilities: A Multidisciplinary Journal, 2014). Best practices in cognitive assessment include five steps that take place within the context of the case conceptualization of a student who is referred for a suspected SLD.
Step 1: Specify hypotheses. Gathering data to guide the cognitive assessment process begins with specifying hypotheses about extrinsic causes or explanations for the observed academic difficulties, which are referred to as exclusionary factors in IDEIA (e.g., insufficient opportunity to learn, inappropriate instruction, cultural and linguistic differences; see NJCLD, 2011). If such factors have been ruled out via a rigorous pre-referral process (e.g., through RTI, interviews with parents and teachers), new hypotheses may be informed by relations between cognitive processes and academic skill acquisition and development. For example, when a student with reading, math, writing, or language difficulties fails to respond adequately to interventions that have been implemented with fidelity, one may hypothesize that such failure is due to a disorder in a basic psychological process. Note that when a student fails to respond to intervention, many hypotheses may be generated based on the available data, some of which would be tested through means other than standardized cognitive tests (e.g., mismatch between student and interventionist, inappropriate instructional materials, inaccurate measurement of progress, social/emotional difficulties).
Use of a hypothesis-driven approach “forces consideration of research and theory because the clinician is operating on the basis of research and theory when the hypothesis is drawn” (Kamphaus, 1993, p. 167). Therefore, when case history data and current information are combined with knowledge of theory and research (as well as information from other fields such as learning disabilities and special education), defensible connections between academic achievement and cognitive processes can be made (e.g., Hale & Fiorello, 2004). Consider the case of reading difficulties. Knowledge of theory and research assists in identifying the most salient cognitive processes related to reading achievement (e.g., phonological processes, successive processing or working memory, rapid naming). Likewise, knowledge of theory and research assists in identifying cognitive processes related to math (e.g., number sense, working memory, rapid retrieval of math facts, attention, processing speed, reasoning, planning) (Decker & Roberts, 2015).
Step 2: Select tests to measure theoretical domains. In Step 1, theory and research provided a foundation for specifying relationships among cognitive processes and academic skills that could be tested via a hypothesis-driven approach. In this step, tests are selected that measure these cognitive processes and academic skills. Researchers and practitioners have made this task relatively straightforward. For example, hundreds of tests and subtests on cognitive instruments have been classified according to CHC theory (Flanagan, Ortiz, & Alfonso, 2017) and supported by research (e.g., Niileksela & Reynolds, 2019). These classifications are useful for selecting tests that address referral concerns and for interpreting test performance. Likewise, information about neuropsychological processes is available to guide test selection and interpretation (Miller, 2010).
Step 3: Administer and score tests. This step involves administering and scoring the tests selected to address the reason for referral. Administration and scoring of all tests should be conducted in accordance with publisher guidelines.
Step 4: Interpret results within the context of all data sources to evaluate hypotheses and draw conclusions. Although each cognitive, achievement, or other ability test typically provides its own system and criteria for evaluating the meaning of test performance, especially regarding classification or descriptions of performance, qualified professionals should not lose sight of the importance of evaluating performance against a normative standard. The normal distribution “has very practical applications for comparing and evaluating psychological data in that the position of any test score on a standard deviation unit scale, in itself, defines the proportion of people taking the test who will obtain scores above or below a given score” (Lezak, 1976, p. 123).
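As an illustration of Lezak’s point (not part of the LDA position itself), most standard scores on cognitive batteries are scaled with a mean of 100 and a standard deviation of 15, an assumption that varies by instrument. Under that scaling, the normal curve fixes the proportion of the norm sample falling below any given score. A minimal sketch using Python’s standard library:

```python
from statistics import NormalDist

# Assumed illustrative scaling: standard scores with mean 100, SD 15.
norms = NormalDist(mu=100, sigma=15)

# A standard score of 85 lies exactly 1 SD below the mean; the normal
# distribution implies roughly 16% of the norm sample scores below it.
proportion_below = norms.cdf(85)
print(f"{proportion_below:.1%} of the norm sample scores below 85")  # 15.9%
```

The same calculation shows why a fixed cut score is a normative statement: choosing 85 versus 78 as the boundary of “typical limits” changes the proportion of the population labeled as performing below those limits.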
In this step, hypotheses about whether performance is within or outside the typical limits of functioning are tested. Based on the evaluative judgments derived from normative comparisons, qualified professionals should decide whether the data suggest that hypotheses are supported, and therefore retained, or are not supported, and therefore rejected. Note that performance that falls outside and below the typical range of functioning should not be used as prima facie evidence of dysfunction. Rather, the hypothesis-driven approach leads only to the conclusion that performance is not within typical limits because it begins a priori with the hypothesis that performance will be within typical limits to avoid confirmatory bias. The presence of a disability is one of many possible reasons (e.g., lack of motivation, anxiety, poor instruction, cultural and linguistic differences) for the patterns of performance observed in the data. Support for any hypothesis related to deficient performance must be established based on convergent data culled from a wide variety of sources and should never be based solely on the results of standardized testing.
When cognitive assessment data are interpreted and evaluated according to a priori hypotheses, there are two possibilities with respect to the results. First, it is possible that functioning in all areas measured according to theory falls within typical limits or higher. If the cognitive processes and academic skills were represented adequately and measured appropriately, then it can be reasonably concluded that there are no weaknesses in functioning. This is not to say, however, that no disability exists. Rather, it is only an indication that standardized test data do not indicate that performance is deficient. Evaluation of standardized test data in isolation is problematic because other data sources, gathered through other methods, also provide relevant information upon which to base conclusions about disability.
The second, and perhaps more probable, outcome from a cognitive assessment of a student suspected to have an SLD is that one or more areas of cognitive processing are outside and below typical limits. This outcome is likely because the referral process itself is selective and decisions to conduct an evaluation are generally based on evidence of problematic performance (e.g., poor grades, failure to respond to scientifically based interventions). Other than efforts directed at identifying gifted individuals, cognitive assessment usually revolves around determinations of dysfunction or disability. When data suggest that performance cannot be construed as within typical limits, then the null hypothesis is rejected, and the evaluator may either conclude that a disability exists when supported with convergent data or specify additional hypotheses. When the data provide contradictory, ambiguous, or insufficient evidence upon which to base a finding of a disability, the process of theory-guided assessment becomes iterative due to the need to specify and test a posteriori hypotheses through additional assessment (e.g., Flanagan, Ortiz, & Alfonso, 2013).
A posteriori hypotheses are constructed in the same manner as a priori hypotheses. The assessment process proceeds much the same as before, returning eventually to step 4. This iteration in assessment assists in “narrow[ing] down the possibilities” or reasons for the existence of a particular finding (Kamphaus, 1993, p. 166) and can be continued until all hypotheses are properly evaluated and valid conclusions may be drawn. At this point in the assessment process, the meaningfulness of the conclusions drawn from standardized test data can only be realized fully when such conclusions are based on converging data sources.
Step 5: Link performance to intervention, monitor intervention, adjust intervention. Much the same way that application of theory guides how test results are interpreted, so too will it influence the way results are translated into recommendations for interventions. Because the application of theory provides a defensible basis for measurement and interpretation, stronger statements regarding probable causal links and avenues for appropriate remediation and logical intervention can be made (e.g., Decker, Strait, Roberts, & Wright, 2018; Mascolo, Alfonso, & Flanagan, 2014; Mather & Jaffe, 2016).
It is important to remember that knowledge regarding the probable causes of poor academic performance is half the battle in guiding and informing the development of appropriate recommendations regarding curricular modifications and supports, remedial techniques, accommodations, and compensatory strategies. Without an understanding of probable causes, it is difficult to select and tailor interventions that will address the student’s unique learning needs. For example, remedial instruction for a student with reading difficulties presumably caused by poor instruction and attendance will likely differ from interventions developed for a student with reading difficulties that are presumably caused by phonological processing and working memory deficits.
Decisions regarding the appropriateness or suitability of standardized cognitive tests for any assessment should be based on several factors, including the intended purpose for assessment and referral concerns. Moreover, qualified professionals should carefully evaluate individual case history information, consider and appraise data from other relevant sources (e.g., parents, teachers, interventionists), and conceptualize the student’s difficulties within the context of their unique educational, cultural, and linguistic experiences. When cognitive assessment is conducted in accordance with the steps outlined here, the greatest utility of cognitive test data is realized, particularly as they apply to SLD classification and intervention planning.
Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. United Kingdom: Cambridge University Press.
Compton, D. L., Fuchs, L. S., Fuchs, D., Lambert, W., & Hamlett, C. (2012). The cognitive and academic profiles of reading and mathematics learning disabilities. Journal of Learning Disabilities, 45, 79–95.
Decker, S. L., & Roberts, A. M. (2015). Specific cognitive predictors of early math problem solving. Psychology in the Schools, 52, 477–488.
Decker, S. L., Strait, J. E., Roberts, A. M., & Wright, E. K. (2018). Cognitive mediators of reading comprehension in early development. Contemporary School Psychology, 22(3), 249-257.
Flanagan, D. P., Ortiz, S. O., & Alfonso, V. C. (2013). Essentials of cross-battery assessment (3rd ed.). Hoboken, NJ: Wiley.
Flanagan, D. P., Ortiz, S. O., & Alfonso, V. C. (2017). Cross-battery assessment software system, version 2.0 (X-BASS 2.0). Hoboken, NJ: Wiley.
Hale, J., Alfonso, V., Berninger, V., Bracken, B., Christo, C., Clark, E., Davis, A., Decker, S., …Yalof, J. (2010). Critical issues in response-to-intervention, comprehensive evaluation, and specific learning disabilities identification and intervention: An expert white paper consensus. Learning Disability Quarterly, 33, 223–236.
Hale, J. B., Chen, S. A., Tan, S. C., Poon, K., Fitzer, K. R., & Boyd, L. A. (2016). Reconciling individual differences with collective needs: The juxtaposition of sociopolitical and neuroscience perspectives on remediation and compensation of student skill deficits. Trends in Neuroscience and Education, 5(2), 41–51. doi: 10.1016/j.tine.2016.04.001
Hale, J. B., & Fiorello, C. A. (2004). School neuropsychology: A practitioner’s handbook. New York, NY: Guilford Press.
Individuals with Disabilities Education Improvement Act, 20 U.S.C. § 1400 (2004)
Johnson, E. S., Humphrey, M., Mellard, D. F., Woods, K., & Swanson, L. (2010). Cognitive processing deficits and students with specific learning disabilities: A selective meta-analysis of the literature. Learning Disability Quarterly, 33, 3–18.
Kamphaus, R. W. (1993). Clinical assessment of children’s intelligence: A handbook for professional practice. Boston, MA: Allyn and Bacon.
KTEA-III error analysis [Special issue]. (2016). Journal of Psychoeducational Assessment, 35(1–2).
National Joint Committee on Learning Disabilities. (2011). Learning disabilities: Implications for policy regarding research and practice: A report by the National Joint Committee on Learning Disabilities. Learning Disability Quarterly, 34(4), 237–241.
Lezak, M. D. (1976). Neuropsychological assessment. New York, NY: Oxford University Press.
Mascolo, J. T., Alfonso, V. C., & Flanagan, D. P. (2014). Essentials of planning, selecting, and tailoring interventions for unique learners. Hoboken, NJ: John Wiley & Sons Inc.
Mather, N., & Jaffe, L. E. (2016). Woodcock-Johnson IV: Reports, recommendations, and strategies. Hoboken, NJ: John Wiley & Sons Inc.
McDonough, E. M., Flanagan, D. P., Sy, M., & Alfonso, V. C. (2017). Specific learning disorder. In S. Goldstein & M. DeVries (Eds.), Handbook of DSM-5 disorders in children and adolescents (pp. 77–104). New York: Springer.
McGrew, K. S., & Wendling, B. J. (2010). Cattell–Horn–Carroll cognitive-achievement relations: What we have learned from the past 20 years of research. Psychology in the Schools, 47(7), 651–675.
Miller, D. C. (Ed.). (2010). Best practices in school neuropsychology: Guidelines for effective practice, assessment, and evidence-based intervention. Hoboken, NJ: John Wiley & Sons Inc.
Miller, D. C., & Maricle, D. E. (2019). Essentials of school neuropsychological assessment. Hoboken, NJ: John Wiley & Sons Inc.
Niileksela, C. R., & Reynolds, M. R. (2019). Enduring the tests of age and time: Wechsler constructs across versions and revisions. Intelligence, 77, 2–15.
Ortiz, S. O. (2014). Best practices in nondiscriminatory assessment. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology VI: Foundations (pp. 61–74). Washington, DC: National Association of School Psychologists.
Sanders, E. A., Berninger, V. W., & Abbott, R. D. (2018). Sequential prediction of literacy achievement for specific learning disabilities contrasting in impaired levels of language in grades 4 to 9. Journal of Learning Disabilities, 51(2), 137–157.
Schneider, W. J., & McGrew, K. S. (2018). The Cattell–Horn–Carroll theory of cognitive abilities. In D. P. Flanagan & E. M. McDonough (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (4th ed., pp. 73–163). New York, NY: The Guilford Press.
Utility of the pattern of strengths and weaknesses approach [Special issue]. (2014). Learning Disabilities: A Multidisciplinary Journal, 20(1).
Based on the purpose of the Learning Disabilities Association of America to create opportunities for success for all individuals affected by learning disabilities through support, education, and advocacy, LDA’s Core Principles were developed and approved by the LDA Board of Directors to establish a set of standards and guidelines reflecting the positions and philosophies of the organization.
Adopted: December 14, 2019