Fairness and English Learners: Toward true peer group measurement
April 25, 2018 (Wednesday), 12:00-1:30 CDT
LEARNING OBJECTIVES
- Distinguish construct invalidity from interpretive invalidity
- Articulate the importance of "true peer" comparisons as the foundation for fairness
- Identify how differences in language exposure affect current test performance for ELs
- Explain why neither age, race, nor first language spoken alone provides the requisite degree of comparability
- Incorporate amount of English exposure as a stratification variable in a test's norm sample design (see the illustrative sketch after this list)
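
To make the last objective concrete, here is a minimal, purely illustrative Python sketch of proportional stratified allocation of norm-sample slots by English-exposure band. The band labels, proportions, and function names are hypothetical assumptions for illustration only; they are not drawn from the webinar or from any published test design.

import random
from collections import defaultdict

# Hypothetical exposure bands (years of exposure to English) with assumed
# population proportions; real values would come from demographic/census data.
EXPOSURE_BANDS = {
    "0-2 years": 0.10,
    "3-5 years": 0.15,
    "6-9 years": 0.20,
    "10+ years / native": 0.55,
}

def allocate_norm_slots(total_n, band_proportions):
    """Proportionally allocate norm-sample slots to each exposure band."""
    slots = {band: round(total_n * p) for band, p in band_proportions.items()}
    # Correct any rounding drift so the allocations sum to total_n.
    slots[max(slots, key=slots.get)] += total_n - sum(slots.values())
    return slots

def draw_stratified_sample(candidates, slots, seed=0):
    """Draw examinees within each exposure band up to its allocated slots.

    `candidates` is a list of dicts, each carrying an 'exposure_band' key.
    """
    rng = random.Random(seed)
    by_band = defaultdict(list)
    for person in candidates:
        by_band[person["exposure_band"]].append(person)
    sample = []
    for band, n_slots in slots.items():
        pool = by_band.get(band, [])
        sample.extend(rng.sample(pool, min(n_slots, len(pool))))
    return sample

if __name__ == "__main__":
    print(allocate_norm_slots(2000, EXPOSURE_BANDS))
    # -> {'0-2 years': 200, '3-5 years': 300, '6-9 years': 400, '10+ years / native': 1100}

In practice, exposure bands would typically be crossed with other stratification variables (e.g., age) rather than used alone; the sketch shows only the single-variable case.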
Abstract
English learner status has not been a variable used to guide test design and construction. Other variables, such as race/ethnicity or native language, have typically been used to create norm samples that give an appearance of representation but offer little evidence of actual fairness. Continued observations of mean group differences between English learners and native English speakers, particularly on language-based tasks, point to developmental differences in English exposure/opportunity to learn as the true variable that moderates test performance of English learners relative to native English speakers. If current and future tests are to meet the requirements for fairness set forth in the 2014 Standards for Educational and Psychological Testing (AERA, APA, & NCME), a significant change of perspective in test development must occur to achieve fairness and equity in the testing of English learners. This webinar will discuss the design considerations necessary to establish fairness and will present the results of their application in the creation of a new test of language (receptive vocabulary) that achieves significantly higher levels of fairness than can be attained through adherence to many of the traditional procedures currently employed in test development. Only when developmental differences in language and language acquisition are controlled can there be movement toward the type and extent of fairness outlined in the Standards.
Bio
Samuel O. Ortiz, Ph.D., is Professor of Psychology at St. John's University, New York. He holds a doctorate in clinical psychology from the University of Southern California and a credential in school psychology, with postdoctoral training in Bilingual School Psychology from San Diego State University. He has served as Visiting Professor and Research Fellow at Nagoya University, Japan; as Vice President for Professional Affairs of APA Division 16 (School Psychology); as a three-term member, and in his final year Chair, of APA’s Committee on Psychological Tests and Assessment; as a member of the Coalition for Psychology in Schools and Education; as a representative on the New York State Committee of Practitioners on English Language Learners and Limited English Proficient Students; and as a member of the APA Presidential Task Force on Educational Disparities. Dr. Ortiz serves or has served on various editorial boards, including Journal of School Psychology, School Psychology Quarterly, Journal of Applied School Psychology, Psychology in the Schools, and Journal of Cognitive Education. He trains and consults nationally and internationally for federal, state, regional, and local educational agencies; conducts and supervises research in the schools; and has published widely on topics including nondiscriminatory assessment, evaluation of English learners, cross-battery assessment, and learning disabilities. He has authored or co-authored numerous articles, chapters, and books, including “Assessment of Culturally and Linguistically Diverse Students: A Practical Guide,” “Essentials of Cross-Battery Assessment, 3rd Edition,” and the Cross-Battery Assessment Software System (X-BASS v1.0 and v2.0). Dr. Ortiz was elected to membership in the Society for the Study of School Psychology (SSSP) in 2007. He is bilingual (Spanish) and bicultural (Puerto Rican).