Undergraduate Course - Psychology

Nina Ventresco, M.Ed.

Adjunct Professor, Psychology
The College of New Jersey
ninaventresco@gmail.com

Assignment Summary

Course: The Psychological Testing Seminar is offered by the Department of Psychology and is highly recommended for students interested in pursuing careers in education, counseling/clinical psychology, or industrial/organizational (I/O) psychology.

Student population: Course enrollment averages 15-20 advanced-level undergraduate students.

Course learning goals/outcomes:

  1. Convey the need for and various uses of psychological tests.
  2. Introduce appropriate sources for finding information about published measures.
  3. Teach the technical underpinnings of test construction and evaluation.
  4. Provide the skills needed to evaluate the appropriateness and quality of any measure students might encounter in research and field settings.

Assignment learning goals/outcomes:

Students will…

  1. Familiarize themselves with objective sources of information about published tests.
  2. Apply concepts learned across the semester to a particular test of interest.
  3. Integrate data from multiple perspectives to (a) understand a given test’s strengths, weaknesses, and utility, and (b) provide related recommendations for test use.

Description of the assignment: 

  1. Students choose a psychological test of interest that has both a published review in the Mental Measurements Yearbook (MMY) and other, non-objective information available (e.g., an advertisement or the test publisher’s website). The instructor pre-approves each student’s selected test so that no two students in the course review the same test.
  2. Students thoroughly review these sources and develop an opinion on the intended use and intended audience of the test as well as the instrument’s psychometric value.
  3. Students then produce a final paper that is similar in format to an MMY review. They are encouraged to explicitly contrast how the test publisher’s description of the test may differ from the MMY test reviewers’ more objective critique. Assignment sections include:
    a. general information about the test (e.g., publisher, cost)
    b. purpose and nature of the test, practical evaluation (e.g., administration time)
    c. technical evaluation (e.g., appropriateness and quality of validity data provided)
    d. summary of test strengths and weaknesses, as well as recommendations for the test’s use (e.g., Would you recommend this test? If so, to whom and for what purpose?).
  4. During the last few weeks of the semester, students give a brief, formal presentation of their test review findings to the class. The presentation is a synopsis of the paper.

View/Download Assignment Description (PDF)

Prerequisite knowledge needed to complete the assignment:

Course materials include Miller and Lovler’s (2019) Foundations of Psychological Testing, as well as supplementary readings, videos, and podcasts. The first part of the course focuses on the psychometric properties of and construction techniques for psychological tests. Topics covered include the appropriate use of tests, survey construction, computerized testing, test development, understanding test scores, reliability, validity, and the psychometric quality of tests. The latter part focuses on specific examples of commonly used tests from education, clinical/counseling, and other applied fields.

Throughout the semester, three MMY reviews are assigned as mandatory course readings. Specifically, these readings coincide with course lessons focused on the technical properties of tests. For example, one assigned MMY review is intended to demonstrate that even some published, commercially available tests do not have compelling evidence for validity. During class, students discuss the extent to which they agree with the MMY reviewers, as well as what, specifically, test developers might do to respond to the reviewers’ criticisms.

Miller, L. A., & Lovler, R. L. (2019). Foundations of psychological testing: A practical approach (6th ed.). Los Angeles, CA: SAGE Publications.

Grading of the assignment:

Students receive feedback from their instructor on the final paper and presentation, as well as feedback from their peers on the final presentation.

In grading students’ final papers, the instructor considers the extent to which the student:

  1. Includes all required elements in each section of the paper (e.g., the “technical evaluation” section must include discussion of the test’s normative samples, reliability, validity, and fairness).
  2. Meaningfully compares and contrasts differing perspectives on the test instrument (e.g., differing reviewer opinions, test publisher versus MMY reviewers).
  3. Accurately and fully presents strengths and weaknesses of the test instrument.
  4. Makes thoughtful recommendations for test use (e.g., in what circumstances and with what populations should this test be used?).
  5. Incorporates concepts from class and the student’s own perspective rather than merely summarizing content from the reviewed sources.

In grading students’ final presentations, the instructor and peers consider the extent to which the most critical information from the final paper is presented clearly and succinctly, in a way that others unfamiliar with the test instrument can understand. Students also receive more general feedback on their oral presentation skills (e.g., time management, public speaking, visual presentation of information).

View/Download Grading Rubric (PDF)

Student response and value of the assignment for students:

I have found that students benefit tremendously from the concise yet information-dense MMY reviews they read as they complete their papers. In particular, the reviews help students interpret the results of statistical analyses that we don’t thoroughly cover in class (e.g., confirmatory factor analysis). It is also helpful for them to see that although reviewers generally tend to agree, they sometimes have differing opinions about a given test’s technical properties, practicality, or overall utility; the field of psychological testing is rarely black and white.

Students report liking this opportunity to serve as the “expert” on a given test and enjoying the exposure (through their peers’ presentations) to a wide variety of test instruments. Many students choose a test that is relevant to their career path (e.g., education, business, mental health), providing a connection between course material and the “real world.”

Overall, I am consistently impressed by the students’ high-quality test review papers, as well as by the feedback they offer one another on their presentations (e.g., “I agree that the test developers should have included toddlers within their normative sample!”). By the end of the semester, it is clear they have developed a more critical eye when it comes to selecting and evaluating tests. In fact, one student emailed me six weeks after the semester ended to discuss the “poorly designed” evaluation instrument in use at her internship site. Student feedback like this serves as further evidence that course learning objectives have been met.