Graduate Course – Clinical Psychology
Assignment Summary
Carl Isenhart, PhD
Clinical Assistant Professor, School of Medicine
University of Arizona Phoenix
Course: Evaluation Methodologies is a graduate course that examines an array of methodologies for assessing the effectiveness and efficiency of a range of clinical interventions, including individual and group therapy and programmatic services.
Student population: The course typically enrolls 20-30 students in the Counseling and Psychological Services Master’s degree program at St. Mary’s University of Minnesota; most of these students are in their second year of study.
Course learning goals/outcomes:
The course focuses on critically evaluating and using information from the existing research literature, implementing and evaluating best practice guidelines and evidence-based practices (EBP), and planning original evaluation projects, including the evaluation of psychological assessment instruments. Specifically, the course goals include:
- Evaluate the outcome research literature to determine the effectiveness and efficiency of clinical interventions.
- Locate, evaluate, and implement evidence-based practices (EBP) for the assessment and treatment of mental disorders.
- Develop and implement single-subject (N of 1) research studies.
- Develop and implement program evaluation studies.
Assignment learning goals/outcomes:
Goals for the assignment are to teach students:
- The essential components of a well-constructed test.
- How to evaluate those essential components.
- A healthy skepticism about the selection of psychological tests, especially when information about a test is incomplete or obfuscated.
Description of the assignment:
The assignment has three parts and sets the context for students to use a published framework of 13 dimensions to grade a psychological assessment. To select the assessments they will grade, students are asked to think about some current aspect of their program that they want to assess or some intervention they might want to implement in their current programming; it could be a client outcome, staff training, etc. After identifying the outcome and implementation goals of the evaluation and the evaluation design in the first two assignments, in the third assignment students identify and describe two data sources they will use to collect data and justify the selection of a test based on how that test scored on the 13-dimension framework.
These 13 dimensions cover the psychometric strength of the test, the scope of the test, and administration issues. For each of the 13 dimensions, the student assigns a score ranging from 1 to 7, for a total of 13 to 91 points. Higher scores indicate that the test more fully meets the criteria for a given dimension. To guide students through grading their identified assessment, they are provided a “Test Grading Form,” which consists of instructions and an orientation to the grading process, criteria and operational definitions (based on Erbes et al., 2004), examples of rating anchors, a rating sheet, a guide to interpreting the overall grade for the test, and a table used to determine whether a correlation coefficient (for reliability or validity) is significant at the .01 level for a given N.
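For readers who want to see the arithmetic behind the form, the sketch below is illustrative only: it is not part of the assignment materials, and the function names, sample ratings, and sample N are hypothetical. It totals the 13 ratings and computes the critical value of r at the .01 level for a given N, which is the calculation the form’s lookup table summarizes.

```python
# Illustrative sketch only; not part of the Test Grading Form.
from scipy import stats

def total_grade(dimension_scores):
    """Sum the 13 dimension ratings (each 1-7); totals range from 13 to 91."""
    assert len(dimension_scores) == 13
    assert all(1 <= s <= 7 for s in dimension_scores)
    return sum(dimension_scores)

def critical_r(n, alpha=0.01):
    """Two-tailed critical Pearson r for N score pairs at the given alpha."""
    df = n - 2
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return t_crit / (t_crit ** 2 + df) ** 0.5

# Hypothetical example: ratings for one test, and a reliability
# coefficient of r = .45 reported for N = 60 examinees.
ratings = [5, 6, 5, 7, 4, 6, 5, 5, 6, 7, 5, 4, 6]
print(total_grade(ratings))       # 71 out of a possible 91
print(round(critical_r(60), 3))   # ~0.330, so r = .45 is significant at .01
```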
Students score the psychological assessment on each of the 13 dimensions using the test manual and supplemental materials provided by the test publisher, reviews in the Mental Measurements Yearbooks (MMY), peer-reviewed journal articles, and their own review of the test materials. MMY reviews are integral to this process, and the MMY is the foundational source students should use when evaluating tests along these 13 dimensions. Students can also use secondary sources (e.g., research articles), but these additional resources are usually unnecessary: the MMY is easy to access online (via the University library’s electronic database subscription), and it addresses most of the information needed to evaluate the 13 dimensions.
Erbes, C., Polusny, M. A., Billig, J., Mylan, M., McGuire, K., Isenhart, C., & Olson, D. (2004). Developing and applying a systematic process for evaluation of clinical outcome assessment instruments. Psychological Services, 1, 31-39.
View/Download Assignment Description (PDF)
Prerequisite knowledge needed to complete the assignment:
To prepare students for the third part of the assignment, I cover the following topics in class, in addition to content related to program evaluation:
- Overview of test psychometrics.
- Information on selecting psychological test instruments.
- Guidelines for:
a. selecting and developing surveys and questionnaires;
b. developing and running focus groups;
c. conducting root cause analyses;
d. analyzing and interpreting parametric and non-parametric data;
e. communicating results and recommendations to stakeholders.
Source materials I use for this content include:
McDavid, J.C., Huse, I., & Hawthorn, L.R.L. (2013). Program evaluation and performance measurement: An introduction to practice (2nd ed.). Los Angeles: Sage.
Ogles, B.M., Lambert, M.J., & Fields, S.A. (2002). Essentials of outcome assessment. New York: John Wiley & Sons, Inc.
Grading of the assignment:
I grade the students’ scoring of the 13 dimensions by awarding one point for each dimension that is addressed and, for the dimensions related to the psychometric strength and scope of the test, an additional point if the scoring decision is supported by the data for the test.
View/Download Grading Rubric (PDF)
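As a minimal, hypothetical illustration of how those rubric points tally (the inputs below are placeholders, not the actual rubric entries, and the number of psychometric/scope dimensions shown is assumed):

```python
def rubric_points(addressed, psychometric_scope_supported):
    """One point per dimension addressed (13 booleans), plus one point for
    each psychometric/scope dimension whose rating is supported by the
    test's data (booleans for those dimensions only)."""
    return sum(addressed) + sum(psychometric_scope_supported)

# Hypothetical: all 13 dimensions addressed; 7 of an assumed 9
# psychometric/scope ratings supported by the test data.
print(rubric_points([True] * 13, [True] * 7 + [False] * 2))  # 20
```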
Student response and value of the assignment for students:
Ultimately, every psychologist needs to decide what grade they are willing to accept when selecting a test to use. Although a test with a nearly perfect score on the 13 dimensions would be optimal, it may be acceptable to use a lower-scoring test given the nature and purpose of the testing program and the lack of alternatives. One benefit of this grading process is that, if a lower-scoring test is used, the psychologist can acknowledge the test’s shortcomings and urge appropriate caution in interpreting and applying the test data in the clinical setting.