Graduate Course – School Psychology
Jim Persinger, PhD
Emporia State University
Course: Foundations of Assessment is an introductory graduate course that familiarizes students with assessment principles in K-12 school settings, both to support students in general education and to determine needs and eligibility for special education and gifted programs.
Student population: Course enrollment averages 28 graduate students who are candidates in school psychology, adaptive education, and gifted education programs.
Course learning goals/outcomes:
- Assessment procedures for kindergarten through high-school-aged students in educational settings, covering a broad range of performance levels, from mild to severe disabilities through giftedness.
- Multicultural and bias issues and other factors that contribute to error in assessment.
- Ethics and legal obligations as they relate to the assessment process.
- Theory, research and practical understanding of the following:
a. Steps in the evaluation process required by the Individuals with Disabilities Education Act.
b. Test scores and their meaning.
c. Selection of appropriate and valid instruments given the presenting problem.
d. Assessment of various domains including intelligence/aptitude, achievement, language, social/behavioral, adaptive, motor, and general development.
e. Assessment using various tools such as norm-referenced tests, criterion-referenced tests, functional behavioral assessment, systematic observation, informal/authentic assessment such as portfolios, and other procedures.
Assignment learning goals/outcomes:
- Know the common types of validity and reliability
- Create examples of each type in a unique assessment context
- Identify strengths and weaknesses for types of evidence provided to support claims of validity and reliability
- Evaluate tools using validity and reliability evidence using the Mental Measurements Yearbook (MMY).
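The reliability coefficients named in these goals can be made concrete with a small worked example. The sketch below uses hypothetical item scores (illustrative numbers only, not drawn from any real instrument) to compute Cronbach's alpha, one common index of internal-consistency reliability:

```python
from statistics import pvariance

# Hypothetical scores for 5 examinees on a 4-item scale
# (made-up data for illustration only).
scores = [
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
]

def cronbach_alpha(rows):
    """Internal consistency: alpha = k/(k-1) * (1 - sum(item variances) / total-score variance)."""
    k = len(rows[0])                       # number of items
    items = list(zip(*rows))               # columns = items
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_var / total_var)

print(round(cronbach_alpha(scores), 3))    # prints 0.941
```

A coefficient near 1.0 indicates that the items behave consistently; published instruments typically report alpha (or a comparable coefficient) in the technical data students are asked to locate in the MMY.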
Description of the assignment:
The first assignment uses MMY to evaluate an instrument that assesses a domain of interest to the student and provides worked examples of validity and reliability evidence. These examples will hopefully inspire candidates to create their own evidence in a unique assessment context in the second assignment.
For the first assignment, the student is to identify an assessment that allows them to solve a problem or answer a question for a specific domain and population of interest. The student is then asked to summarize the instrument's technical data (e.g., validity and reliability evidence, reviewers' impressions) and evaluate the tool's usefulness in a specific assessment context using the following outline:
- Title of the instrument
- Exact age-range for which this instrument may be used
- One type of validity evidence provided for this test
- One type of reliability evidence provided for this test
- Brief (2-3 sentences) description of the normative population or the criterion from which the test scores were created
- Brief paragraph (4-5 sentences) describing a reviewer’s impression of the instrument including its strengths and weaknesses
- Describe (in 100 words or fewer), based upon the information available in your resource, whether this is an instrument you might use to answer the question or address the problem you described at the beginning of this assignment.
Based on what the student learns from using MMY to evaluate an existing instrument, the second assignment asks the student to create a fictitious instrument for which they provide proper technical information. They then present the instrument to the class, aiming to convince their classmates that it is a good tool. The topic of this fictitious tool does not need to be related to the tool evaluated in the first assignment.
View/Download First Assignment Description (PDF)
View/Download Second Assignment Description (PDF)
Prerequisite knowledge needed to complete the assignment:
The course draws on content from various domains including intelligence/aptitude, achievement, language, social/behavioral, adaptive, motor, and general development. Understanding of test scores and their meaning, and the selection of valid tools, are reviewed. Multicultural and bias issues and other factors that contribute to error in assessment are key components, as are ethics and legal obligations related to the assessment process.
Tools such as norm-referenced tests, criterion-referenced tests, functional behavioral assessment, systematic observation, and informal/authentic assessment such as portfolios are included. The course textbook does not treat validity and reliability in the depth I think students need to discern between higher- and lower-quality tests for these assignments, so I provide additional discussion in my lectures. These lectures cover not just content but also numerous examples illustrating why the evaluation of technical evidence is important.
Grading of the assignments:
A completion grade is provided for both assignments. For the first assignment, I work with students until they pass it, so they have the foundational knowledge they need to complete the second assignment. For the second assignment, I use a rubric to grade their participation in the discussion of their classmates' fictitious instruments.
View/Download Discussion Rubric (PDF)
Student response and value of the assignment for students:
The assignment has proven to be one of the most popular in our graduate program: it allows students to demonstrate an understanding of the MMY and the technical characteristics of instruments in a fun and creative context.