Fairness Evidence Expectations for Educational Tests
March 28, 2018 (Wednesday), 12:00-1:30 CDT
Learning Objectives
- Discuss fairness as a fundamental issue that pervades all aspects of test development, use, and interpretation.
- Recognize that test development practices and procedures can be structured to provide evidence that addresses fairness issues.
- Review specific strategies that can be implemented to help minimize construct-irrelevant components during test development.
Abstract
The assessment of student achievement continues to be a fundamental and significant source of information in our society. While recognizing that achievement data are powerful, we must always guide their use toward appropriate interpretation. Test designers, developers, and researchers must manage the complexity of validating assessments while recognizing that it is the information assessments provide that determines their contribution and value to students, educators, and policymakers. The role of measurement is to provide information that permits decisions to be as informed and fair as possible. Every aspect of design and development contributes to fairness. Fairness is a concern from the earliest design decisions about the purposes an assessment is to serve. Every decision made about the test specifications (for example, balance of content, item types, testing time, alignment, delivery mode, number of forms, and depth and breadth of item pools) also raises fairness issues. In addition, the design of the research that addresses comparability, reported scores, and interpretability directly affects fairness. This session will share considerations and strategies for treating fairness as an ongoing, iterative concern that is addressed at all stages of development and evaluated in a systematic way.
Bio
Catherine Welch, Ph.D., is a professor of Educational Measurement and Statistics in the College of Education at the University of Iowa. She teaches courses in educational measurement and large-scale assessment to MA and PhD candidates. Her research areas include the design, alignment, and validation of student achievement tests; the evaluation of student growth and readiness; innovative item formats; comparability; and the utility of information. She has also recently coordinated thesis research focused on item pool efficiencies, innovative item validation, online testing with young students, fairness with respect to college choice, student growth, and readiness modeling.

As a co-director of Iowa Testing Programs, a testing center in the College of Education, Dr. Welch is responsible for the design, development, and validation of K-12 achievement tests in mathematics, English language arts, science, and social studies. She coordinates both state and national research studies to pilot and field test materials and conducts psychometric and technical analyses to validate both items and test forms. She is responsible for securing research and development funding to support the center's agenda; this funding typically includes awarded contracts, grants, and intergovernmental agreements. Within the state of Iowa, Dr. Welch is the director of the Statewide Testing Programs and coordinates the delivery of the state assessment system to 340 school districts. She supervises professional staff (psychometricians and content specialists) and graduate students in the delivery of assessment information to Iowa students, schools, policymakers, and the public.

Prior to joining the University of Iowa in 2007, Dr. Welch was assistant vice president of test development at ACT, where she was responsible for six areas: K-12 Assessment Programs, the Performance Assessment Center, the Scoring Center, MCAT, Classroom Connections, and Program Support. Dr. Welch earned her PhD in Educational Measurement and Statistics from the University of Iowa.