Valid interpretation and use of social and emotional learning (SEL) assessment results does not end with identifying an assessment that aligns with your local purposes and ensuring its technical quality. It also requires careful attention to administering and scoring the assessment and to communicating its results. This guide draws on the 2014 Standards for Educational and Psychological Testing, published by the American Educational Research Association (AERA), the American Psychological Association (APA), and the National Council on Measurement in Education (NCME), to identify considerations before and after administering an SEL assessment.
Before administering and scoring an SEL assessment
Most SEL assessments have manuals that provide guidance on how to administer and score the assessment appropriately. Use this guidance to develop a formal plan for preparing personnel who will use the assessment. Outline the following considerations in that plan:
Identifying and sharing standardized procedures for administration and scoring.
Standardized administration procedures may involve instructions, time limits, and assessment conditions. Standardized scoring procedures may address how to aggregate item responses or apply a rubric. Follow any qualifications or training recommended by the assessment developer for administration and scoring.
If there is a need to ensure the assessment is being administered and scored correctly and consistently across individuals, classrooms, or schools, additional quality-control checks may be called for, along with documentation of any deviations or disruptions.
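One common standardized scoring procedure is aggregating item responses into a scale score. The sketch below is a hypothetical illustration only: the item names, the set of reverse-keyed items, and the 1-5 response scale are assumptions, not drawn from any real instrument. Always follow the scoring rules in the developer's manual.

```python
# Hypothetical illustration: summing 5-point Likert-type item responses
# into a scale score, reverse-coding negatively worded items first.
# All item names and keys below are made up for demonstration.

SCALE_MAX = 5  # assumed 1-5 response options

def score_scale(responses, reverse_keyed):
    """Sum item responses after reverse-coding negatively worded items."""
    total = 0
    for item, value in responses.items():
        if not 1 <= value <= SCALE_MAX:
            raise ValueError(f"{item}: response {value} outside 1-{SCALE_MAX}")
        # Reverse-code: on a 1-5 scale, 1 becomes 5, 2 becomes 4, and so on.
        total += (SCALE_MAX + 1 - value) if item in reverse_keyed else value
    return total

responses = {"q1": 4, "q2": 2, "q3": 5}          # q2 is negatively worded
print(score_scale(responses, reverse_keyed={"q2"}))  # 4 + (6 - 2) + 5 = 13
```

A quality-control check like the range validation above is one way to catch scoring deviations so they can be documented rather than silently absorbed into results.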
Availability and use of accommodations.
If students need alterations to administration and scoring procedures to receive full and fair access to the assessment (e.g., students with disabilities or from diverse cultural and linguistic backgrounds), identify valid accommodations documented by the assessment developer. If a needed accommodation is not documented, consult state or school district policies for guidance. Inform students, as well as the school personnel administering the assessment, about available accommodations and the process for obtaining them.
Security and integrity of assessment materials and scores.
Protect copyrighted materials by not allowing assessment materials to be reproduced or re-created, in paper or electronic form, without the consent of the copyright holder. If results carry consequences, have a plan for securing assessment materials and for protecting the integrity of scores from fraud or deceit on the part of the respondent or assessment user.
Providing instructions, practice, or other support to test takers.
Inform respondents if the way in which they respond (e.g., guessing or the speed of their responses) could affect their scores. If unfamiliar equipment or software is used in administration (e.g., computer-administered assessments), provide respondents practice opportunities with the equipment or software unless the use of unfamiliar tools is part of what is being assessed (e.g., problem solving).
Collection of empirical evidence for alterations.
If the assessment format, mode of administration, instructions, or language of the assessment is altered, the assessment developer should provide (or, alternatively, the assessment user should collect) empirical evidence that those alterations will not affect the reliability/precision and validity of score interpretations. Published norms may not be valid under altered conditions if the changes are determined to alter the meaning of scores.
Before reporting and interpreting results from an SEL assessment
Test users should only report and interpret results as recommended and empirically supported by the assessment developer. To reach conclusions that validly inform decision-making, keep the following considerations in mind:
Levels at which results are reported.
Report assessment scores only at the levels intended and empirically supported by the assessment developer: for example, group versus individual scores, overall scores versus subscores, or separate scores for subgroups. Never combine or disaggregate scores unless doing so is recommended and empirically supported by the assessment developer. If reporting subgroup results, involve individuals familiar with those subgroups in interpretation and use.
How results are reported.
Ensure that reporting of results protects the copyright of the assessment materials and the privacy of assessment takers through the security and confidentiality of individual scores. Consult the developer's cautions about the limitations of the scores, the norms/comparison groups, and potential misinterpretation and misuse. Report the amount of error expected for a score, using the standard error of measurement or confidence intervals, to indicate that scores are estimates that can vary from one occasion to the next.
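Reporting a score as a range rather than a point is straightforward arithmetic. The sketch below illustrates the idea under the usual classical-test-theory assumptions; the observed score (42), scale standard deviation (10), and reliability (0.91) are made-up values, so substitute the statistics published in the developer's manual.

```python
# Hypothetical illustration: a 95% confidence band around an observed score
# using the standard error of measurement (SEM). All numeric values below
# are invented for demonstration.
import math

def sem_from_reliability(sd, reliability):
    """Classical test theory: SEM = SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1 - reliability)

def confidence_interval(observed_score, sem, z=1.96):
    """Return the (low, high) band in which the 'true' score likely falls.

    z = 1.96 corresponds to a 95% confidence level under the usual
    normal-error assumption.
    """
    margin = z * sem
    return observed_score - margin, observed_score + margin

sem = sem_from_reliability(sd=10, reliability=0.91)   # 10 * sqrt(0.09) = 3.0
low, high = confidence_interval(42, sem)
print(f"A score of 42 is best reported as roughly {low:.1f} to {high:.1f}")
# 42 +/- 1.96 * 3.0, i.e., about 36.1 to 47.9
```

Presenting the band rather than the bare score communicates that a retest on another occasion could plausibly land anywhere in that range.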
Use simple language to describe what the assessment covers, what scores represent, the precision/reliability of the scores, and how to interpret and use the scores. If reporting performance categories or labels, clearly and precisely describe the intended inference. Minimize potential biases for assessment takers due to demographics (e.g., cultural group, age, social class, gender).
Identify supplemental information (e.g., results from other assessments, academic/behavioral data) that would support appropriate interpretation and use, especially if reporting individual-level scores. Indicate how to weigh assessment scores in relation to supplemental information when making decisions.
If an assessment is used regularly over time or has been used previously, verify that assessment interpretations remain relevant and appropriate whenever there are significant changes in the SEL curriculum or instruction, the population of assessment takers, the mode of administration, or the purpose of conducting the assessment.
Before communicating assessment results and conclusions to stakeholders
A plan for communicating assessment results to stakeholders can assist in ensuring valid interpretation and use and minimize potential negative consequences. Considerations include the following:
Provide framing information.
At the beginning of every discussion of assessment results, provide the assessment name, the developer's stated purpose and intended interpretation and use, and the developer's cautions about interpretation and use. If sharing assessment results publicly, accompany them with enough information about the purpose of the assessment and how to appropriately interpret the results to minimize the possibility of misinterpretation.
Anticipate misinterpretations and set parameters for the discussion.
Anticipate the possibility that stakeholders might oversimplify their interpretations of results or misattribute the reasons for the results. Encourage sound conclusions and decision-making by thinking through these potential issues ahead of time. Before discussing assessment results, use the assessment developer's recommendations to set the parameters of the conversation, indicating which topics and conclusions are in bounds and which are out of bounds (e.g., assigning meaning to results that was unintended or has no evidential basis).