Fall Conference

Registration for the 2022 Fall Conference is now live!

If you'd like more information on the sessions, including topics and speakers, click here.

Register



Use of Multiple Disabilities Category

Dear Assessment Committee,

Could WSASP provide any guidance for the appropriate application of the eligibility category of Multiple Disabilities?

Dear School Psychologist,

Per WAC 392-172A-01035 and IDEA SEC. 300.8 (C) (7): “Multiple disabilities means concomitant impairments, the combination of which causes such severe educational needs that they cannot be accommodated in special education programs solely for one of the impairments. The term, multiple disabilities, does not include deaf-blindness.”

Two words from this definition may be worth noting in particular. The word concomitant means "accompanying," especially in a subordinate or incidental way; one example might be a chromosomal abnormality that affects several aspects of a child's development.

The word severe in this context implies that the combined impact of the disabilities creates an intense, serious educational need: the student's needs must be severe enough that they cannot be addressed by providing special education services for only one of the impairments. The federal definition of the Multiple Disabilities category lists intellectual disability-blindness and intellectual disability-orthopedic impairment as examples.

IDEA notes that the purpose of evaluating in all areas of suspected disability is to produce a report that is "sufficiently comprehensive to identify all of the child's special education and related services needs, whether or not commonly linked to the disability category in which the child has been classified" (34 C.F.R. § 300.304(c)(6)). Looking at how the law has been interpreted in the past may also help in understanding this category's application. In a 2018 response to a citizen's complaint, OSPI explained that "Eligibility under the category of multiple disabilities does not negate the existence of an Autism disability; it just means that the Student has more than one disability" (SECC No. 18-71). In other words, where a student's needs are already encompassed by their IEP, this category may simply indicate that more than one disability impacts the child's learning. A hearing decision from New York's Office of State Review put it similarly: "At this juncture, when the student's eligibility for special education is not in dispute, the significance of the disability category label is more relevant to the local educational agency and State reporting requirements than it is to determining an appropriate IEP for the individual student" (NYSED SRO, No. 20-138). Though it is important to consider a student's needs when the team selects a special education eligibility category, the category matters less than ensuring that the evaluation is sufficiently comprehensive to cover all areas of potential need so that the student receives FAPE.

According to the National Center for Education Statistics, the Multiple Disabilities category accounts for about 133,000 students served in the United States, or about 1.9% of all students served in special education; measured against total enrollment, about 0.9% of all students qualify under this category. Based on 2019-20 data, this category had the fourth-highest rate of placement in separate facilities, behind only the categories covering visual or hearing impairments. The data also indicate that this category has the highest prevalence of any disability category in the "Homebound/Hospital" placement. Within schools, it accounts for the second-largest number of students, after Intellectual Disability, who spend 0-39% of their school time in the general education setting. Generally speaking, the category is used relatively rarely, and students identified under it tend to spend a comparatively low percentage of their time in general education.

It is important to take a holistic view of a student's needs when determining the appropriate eligibility category. In meeting those needs, the team should always place the category second to ensuring that the student's right to FAPE is met by their IEP. In some situations, the category may be useful in demonstrating that a particular area of eligibility was considered in determining the student's needs. School psychologists are best served by guiding teams toward a robust discussion of which areas most impact a student, treating the category as a record of the needs that were considered in serving that student's education.


Frequency of Cognitive Assessment

Recently, the WSASP Assessment Committee started a monthly "Dear Assessment Committee" column, which responds to relevant questions submitted by school psychologists across the state. For additional responses, or to submit a question, please visit the WSASP website.

The following article is a reply to a member’s question about best practices for how often teams should complete cognitive assessments during reevaluations.

Considerations for the Frequency of Cognitive Assessments: A Dear Assessment Committee Article

Leayh Abel, Ed.S., NCSP
Assessment Committee Co-Chair

IDEA requires that students receiving special education be reevaluated at least every three years, though evaluations can occur more frequently at the request of stakeholders such as the parents or the school team. An initial evaluation, completed to determine a student's eligibility for special education, commonly includes cognitive testing as one part of a comprehensive evaluation. Testing for special education can serve a variety of purposes, including identifying areas of service for a student or supporting certain eligibility categories. Less often discussed is how frequently, and at what intervals, cognitive testing should be repeated during reevaluations. If a student is first evaluated in second grade, for example, reevaluations would legally occur at least three times before they finish school. In cases such as this, what is the best practice for reassessing cognitive skills during reevaluations?
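
As a rough illustration of this cycle, here is a minimal sketch in Python (a hypothetical helper, not anything prescribed by IDEA or WAC) that enumerates triennial reevaluation due dates from an initial evaluation date:

    # Sketch: triennial reevaluation due dates under IDEA's three-year minimum.
    from datetime import date

    def reevaluation_due_dates(initial_eval: date, exit_year: int) -> list[date]:
        """Due dates at three-year intervals until the student exits school."""
        dates, due = [], initial_eval
        while due.year + 3 <= exit_year:
            due = due.replace(year=due.year + 3)  # at least once every 3 years
            dates.append(due)
        return dates

    # A student first evaluated in second grade (say, fall 2014) who exits in
    # 2024 would be due for reevaluation in 2017, 2020, and 2023 (three times).
    print(reevaluation_due_dates(date(2014, 10, 1), 2024))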

Assessment planning is individualized based on student need and the referral questions for the evaluation. If the team is considering readministering a cognitive measure, which individual variables are relevant? Little established best practice has been communicated on this subject, though anecdotal rules abound. School districts sometimes have arbitrary answers such as 'every other evaluation' or 'once in elementary school and again in high school', often without specific reasoning behind the rule.

Potential guidelines are more complicated than that, as they hinge on many factors that should be considered by the practitioners and the team. A recent publication from NASP and the National Center for Learning Disabilities (2020) addressing how to navigate assessment during the COVID-19 pandemic reiterated that, for a reevaluation, "Standardized assessments are not required by law" and that "Educators should only administer assessments if instructional data and observation indicate that the results of any of the assessment may have changed or if additional data is needed to supplement observation or other forms of data." Standardized testing is only one part of the evaluation process, and the team should consider other components when determining appropriate assessment approaches.

To guide team considerations, the following questions may be weighed when approaching a reevaluation that could include cognitive testing. Please note that these are not exhaustive, but rather a guideline for considering evaluation updates. One important area not fully explored in this article is testing considerations for emerging bilingual students, along with the documented differences that exist for persons and communities of color (Council of National Psychological Associations for the Advancement of Ethnic Minority Interests, 2016). Another is the impact on validity of alterations to testing parameters during the COVID-19 pandemic. Best practices for cognitive testing remain a complicated and expansive topic, and the WSASP Assessment Committee hopes to continue exploring its many contexts in the future.

Considerations:

How much time has passed since the last testing, and what test was used?

An important area for all practitioners to consider when examining cognitive testing is the reason for such testing in the first place. Cognitive testing demonstrates strong predictive validity, correlating highly with long-term academic outcomes and adaptive behaviors (Kranzler & Floyd, 2020). Despite this strong association, there are potential hurdles to the accuracy of cognitive testing over time, including the Flynn effect: the tendency of measured cognitive scores to rise over time, so that tests with older norms inflate scores relative to more recently normed tests. Based on Flynn's (1984) estimates, scores may increase on average by 0.3 points per year, which is why general guidelines advise against using a test whose most recent norms are more than 10-15 years old. A meta-analysis of the Flynn effect concluded that "when individuals are tested near the release of a newly normed assessment, the difference in IQ scores produced by the newer test and the older test would indicate that the individual is performing more poorly than what earlier testing may have suggested" (Trahan et al., 2014). The meta-analysis further proposed that this may affect a student when "an individual is assessed at two different sites (e.g., when a child moves and is assessed in a different school district), it may be possible for the child to have completed the newer version of a test first, especially if the assessments are occurring near to the release of a newly normed assessment. In this case, the IQ score produced by the second assessment may be particularly inflated due to both the Flynn effect and prior exposure. This child may be more likely to receive a diagnosis of a learning disability during this second assessment…" (Trahan et al., 2014).
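
To make the size of this drift concrete, here is a minimal sketch (an illustrative calculation only, using the 0.3-points-per-year average cited above) of the expected inflation as norms age:

    # Sketch: expected upward drift in IQ points as test norms age (Flynn effect).
    FLYNN_POINTS_PER_YEAR = 0.3  # average annual gain estimated by Flynn (1984)

    def estimated_norm_inflation(years_since_norming: float) -> float:
        """Approximate inflation, in IQ points, for a test normed this long ago."""
        return FLYNN_POINTS_PER_YEAR * years_since_norming

    for years in (5, 10, 15):
        print(f"{years} years since norming: ~{estimated_norm_inflation(years):.1f} points")
    # At 15 years the expected drift is ~4.5 points, enough to matter near
    # eligibility cutoffs, one reason 10-15 years is often treated as the limit.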

Another consideration is the type of testing previously completed for a student. Kranzler and Floyd (2020) noted limitations of nonverbal testing, stating that "When using nonverbal intelligence tests, examiners must note that, as a whole, they tend not to predict academic achievement as well as verbal tests – for all groups" (p. 343). Though the predictive validity of nonverbal assessments is generally considered acceptable for evaluations, the reduction in language loading does limit the number of cognitive areas the test claims to measure (McCallum, 2013). That reduction is a trade-off, as it allows testing to remain generally valid for students with limited English proficiency (LEP), as well as those from diverse cultural backgrounds. However, because of these limits on predictive validity, Kranzler and Floyd (2020) suggest that "recommendations and interventions based on the results of nonverbal tests for children with LEP always should be tentative and short-term (no longer than 1 year at most)" (p. 343). As stated above, the full implications for emerging bilingual students are not explored here; rather, this section serves as an example of the many factors practitioners and teams should weigh when planning assessment for a student's reevaluation.

Are there reasons for the team to expect the results of another cognitive assessment to display different information?

Reviewing the barriers to valid and reliable cognitive testing is an important part of the evaluation process before administering any intelligence test. Even so, the full breadth of a student's information is not always available to examiners beforehand, which may lead to inaccurate testing. Though immediate impacts on validity should be documented at the time of testing, future testing may benefit from knowledge of circumstances such as an undiagnosed medical condition or other information unknown during the initial intelligence testing. In such instances, later evaluations can better inform the choice of the most appropriate assessment for a child, or prompt changes to administration (e.g., time of day, use of frequent breaks) that produce more valid results.

Practitioners should always aim to minimize potential confounds by examining existing data (e.g., a child's hearing and vision screenings or medication usage). However, there may be a limit to which conditions fall under the evaluator's purview. For example, students prescribed certain medications, such as anti-seizure drugs, may see an impact on their overall performance on a cognitive assessment. Later evaluators may note that a medication with relevant side effects has been discontinued or changed since the last evaluation, which may indicate that updated testing should be completed. This article cannot provide an exhaustive list of the varied reasons that new information may be gained from testing; to explore each student's unique circumstances, practitioners should review all available information from previous assessments to gain a clear perspective on whether updated testing is likely to yield significantly different results.

What additional factors may be impacting a student currently?

Cognitive measures examine the rate of development compared to peers of the same age. Updated testing may therefore prove useful where a student's abilities are affected differently over time as they grow and develop, since it can show changes in their cognitive development relative to same-aged peers, or, in certain cases, relative to the student's own previous abilities. Developmental information plays an important role in this process: screening for occurrences of Traumatic Brain Injury (TBI) or other progressive or acquired medical conditions is part of looking for confounds that might be impacting cognitive abilities. Examples include major medical episodes such as childhood chemotherapy exposure, which multiple publications have noted can cause 'late effects' on cognitive development, including significant decreases in full-scale cognitive scores. To weigh such factors, teams should review updated medical information and conduct file reviews, which may assist in determining the need for additional testing.

What are the student’s needs outside of the school setting?

Special education evaluations are not obligated to provide testing for parents and families beyond considerations for school-based eligibility and development of an IEP. However, students' needs outside of school often intersect with their school-based experiences and should be thoughtfully reviewed when approaching any evaluation. Cognitive testing may be used in many settings outside of school, including for medical diagnoses, treatment planning, or determining access to social services. In Washington State, only certain cognitive assessments are accepted by the Developmental Disabilities Administration (DDA) to meet the criteria for receiving social services; teams may consider this factor, as it can limit a student's ability to access external supports that could also help them in the school setting. Similarly, the College Board has specific requirements for students to receive accommodations on the SAT or AP exams. Though it is not incumbent upon school districts to provide updated testing for these purposes, they represent some of the needs students and families may have for updated testing in the wider world, and they are relevant for teams to consider when reviewing a student's evaluation needs.
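
As one way to keep these four questions visible during assessment planning, here is a minimal sketch (a hypothetical structure built from this article's questions, not formal guidance or a decision rule) that organizes them into a simple checklist:

    # Sketch: the four consideration questions as a reevaluation planning checklist.
    from dataclasses import dataclass

    @dataclass
    class ReevaluationContext:
        norms_outdated: bool            # e.g., norms older than 10-15 years
        new_information: bool           # e.g., a condition unknown at first testing
        developmental_change: bool      # e.g., TBI or late effects of treatment
        external_need_for_scores: bool  # e.g., DDA eligibility, College Board

    def considerations_favoring_retest(ctx: ReevaluationContext) -> list[str]:
        """Collect which questions point toward updated cognitive testing."""
        checks = [
            (ctx.norms_outdated, "Prior norms may be outdated (Flynn effect)."),
            (ctx.new_information, "New information suggests results could differ."),
            (ctx.developmental_change, "Developmental or medical factors have changed."),
            (ctx.external_need_for_scores, "Scores may be needed outside school."),
        ]
        return [reason for flag, reason in checks if flag]

    # Example: outdated norms plus an external need for scores.
    ctx = ReevaluationContext(True, False, False, True)
    print(considerations_favoring_retest(ctx))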

Conclusion:

Overall, outside of changes due to external factors, the long-term stability of cognitive scores is well established in the literature, with scores typically stabilizing in early elementary school (Kranzler & Floyd, 2020). While students may experience growth between early education and later adolescence, this growth is still relative to same-aged peers. Even so, there are many reasons that updated testing may prove helpful to a team or a family. Consistent with the NASP Practice Domains (2020b), practitioners should reflect on Domain 1: "School psychologists understand and utilize assessment methods for identifying strengths and needs; for developing effective interventions, services, and programs". Though this domain is not satisfied solely by completing updated cognitive testing, or by gathering data from any one method, it is important to consider the role testing may play for any student. A thorough review of the facts and circumstances of a student's evaluation will serve practitioners best as they proceed through the reevaluation process.

References:

  • Council of National Psychological Associations for the Advancement of Ethnic Minority Interests. (2016). Testing and assessment with persons & communities of color. American Psychological Association. https://www.apa.org/pi/oema
  • Flynn, J. R. (1984). The mean IQ of Americans: Massive gains 1932–1978. Psychological Bulletin, 95(1), 29–51.
  • Kranzler, J. H., & Floyd, R. G. (2020). Assessing intelligence in children and adolescents: A practical guide for evidence-based assessment. Rowman & Littlefield.
  • McCallum, R. S. (2013). Assessing intelligence nonverbally. In K. F. Geisinger, B. A. Bracken, J. F. Carlson, J. I. C. Hansen, N. R. Kuncel, S. P. Reise, & M. C. Rodriguez (Eds.), APA handbook of testing and assessment in psychology, Vol. 3: Testing and assessment in school psychology and education (pp. 71–99). American Psychological Association.
  • National Center for Learning Disabilities & National Association of School Psychologists. (2020). Navigating special education evaluations for specific learning disabilities amid the COVID-19 pandemic. https://www.ncld.org/wp-content/uploads/2020/11/Navigating-Special-Education-Evaluations-for-Specific-Learning-Disabilities-SLD-Amid-the-COVID-19-Pandemic.pdf
  • National Association of School Psychologists. (2020b). The professional standards of the National Association of School Psychologists. Author.
  • Trahan, L. H., Stuebing, K. K., Fletcher, J. M., & Hiscock, M. (2014). The Flynn effect: A meta-analysis. Psychological Bulletin, 140(5), 1332–1360. https://doi.org/10.1037/a0037173

Washington State Association of School Psychologists
816 W. Francis Ave #214
Spokane, WA 99205
contact@wsasp.org
509-724-1587
