Graduate Course: Educational Research

Assignment Summary

Leanne Ketterlin Geller, PhD

Professor, Educational Policy and Leadership

Southern Methodist University

lkgeller@smu.edu

Course: Measurement and Assessment 1 is an introductory course offered to doctoral students at the beginning of the second year of their program.

Student population: Course enrollment averages 5-10 doctoral students pursuing a degree in educational research.


Course learning goals/outcomes:

This course focuses on fundamental theories of measurement and assessment. Students will:

  1. Learn the historical progression of the concept of validity, culminating in a study of the argument-based approach to validity.
  2. Understand the role of evidence in validation and consider appropriateness and sufficiency of evidence in a validity argument.
        a. Understand and know how to design studies to gather sources of validity evidence (e.g., content-related evidence, relations to other variables, response processes).
        b. Understand and know how to calculate indices of reliability (a brief illustrative sketch follows this list).
  3. Learn and apply instrument development principles.
  4. Understand and evaluate the implications of potential threats to validity and reliability.

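As a concrete illustration of goal 2b, one widely used index of reliability, Cronbach’s alpha, can be computed directly from an item-score matrix. The sketch below is not part of the course materials; the function and the data are hypothetical and serve only to unpack the formula behind the index.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of item scores."""
    scores = np.asarray(item_scores, dtype=float)
    k = scores.shape[1]                              # number of items
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical responses: 5 respondents answering 4 Likert-type items
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
print(f"Cronbach's alpha: {cronbach_alpha(responses):.3f}")
```

In practice, students would typically obtain such indices from established statistical software; the point here is only to show what a reliability index summarizes.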

Assignment learning goals/outcomes:

Through this assignment, students will evaluate the validity of a stated interpretation or use of an existing instrument by:

  1. Critically examining the quality of sources of evidence for an existing instrument,
  2. Determining the sufficiency of evidence supporting or refuting the stated interpretation or use, and
  3. Examining the overall plausibility of the validity of the stated interpretation or use.


Description of the assignment: 

To measure students’ attainment of these skills, students critique an existing instrument that is currently being used, or is under consideration, for a specific purpose within their research assistantships. The critique requires students to critically evaluate the appropriateness of the instrument for a specific interpretation or use. Students examine the validity evidence supporting (or refuting) the claims and, ultimately, make a summative evaluation of the validity of the interpretation or use of the test scores.

  1. Students research validity evidence for the specified interpretation or use of the test, beginning with an examination of the information presented in the Mental Measurements Yearbook (MMY).
  2. Students examine the test manual and two research articles about the test. They amass theoretical and technical information about the instrument.
  3. Students evaluate the coherence of the accumulated evidence. They make recommendations for additional evidence or studies that may be needed to further substantiate the score-based claims. Students are encouraged to critically examine the commentary and summary provided in the MMY database and integrate these perspectives in their evaluation.
  4. Students defend their perspective (e.g., the evidence is sufficient or insufficient to warrant using the instrument for the specific purpose) in writing and in a class presentation, and they explain how they arrived at their final judgment. Other students are encouraged to generate counter-claims that could be used to refute the judgment. This discussion is generative and is not part of the grading scheme.

 View/Download Assignment Description & Scoring Guide


Prerequisite knowledge needed to complete the assignment

Prior to enrolling in this course, students have already completed introductory courses in qualitative and quantitative research design and introductory statistics and are simultaneously taking courses in advanced quantitative research design and statistics. In this course, we discuss the importance of alignment between the purpose of an existing assessment and its use within research contexts and consider which sources of evidence are needed to justify the use. 

Course readings include the Standards for Educational and Psychological Testing (AERA, APA, & NCME, 2014) and a variety of chapters and articles, such as:

  • Brennan, R. L. (Ed.). (2006). Educational measurement (4th ed.). Westport, CT: American Council on Education/Praeger.
  • Cizek, G. J., Rosenberg, S. L., & Koons, H. H. (2008). Sources of validity evidence for educational and psychological tests. Educational and Psychological Measurement, 68(3), 397-412.
  • Kane, M. (2013). The argument-based approach to validation. School Psychology Review, 42(4), 448-457.
  • Messick, S. (1989). Meaning and values in test validation: The science and ethics of assessment. Educational Researcher, 18(2), 5-11.


Grading of the Assignment

In addition to the verbal feedback provided by students’ peers, I provide extensive written feedback. Students may use both sources of feedback to revise their critique.



Student response and value of the assignment for students

I continue to use this assignment to measure students’ attainment of these instructional objectives because of the positive feedback I receive from students and the supervisors of their research assistantships. Students routinely comment that as a result of this assignment they are able to critically evaluate the measures described in research articles and are more attuned to the importance of careful selection of instruments in research designs. Supervisors comment on students’ critical evaluation of instruments.