REVIEWER INFORMATION
Reviewing a test is an extremely important professional responsibility with critical implications for test authors, publishers, and users. Invited reviewers are expected to give the test the concentrated attention and care the task demands. Part 1 of this document describes our policies and general conditions that govern the test review process. Part 2 provides guidance on preparing test reviews, such as format and organizational concerns, as well as electronic submission of completed reviews.
Part 1: Policies and General Conditions
Conflicts of Interest
If for any reason a reviewer believes they have a conflict of interest or are otherwise not in a position to write an objective and unbiased review of a particular test, the reviewer should request a substitute test.
Joint Authorship
Joint authorship of reviews is acceptable only with advance permission from the Buros Center. For reasons both legal and commercial, the invited author must be the first author of the review. As such, the invited author retains full responsibility for the content and quality of the review. Undergraduate students may not serve as coauthors. Due to publication and distribution costs, only one MMY is provided to joint authors.
Use of Artificial Intelligence Technologies
Reviews may not be written using artificial intelligence (AI) technologies or other non-human methods. Such technologies may be used only as tools to assist in editing written work created by the test reviewer, who retains control over every word in the review.
Security of Test Materials
Reviewers are responsible for storing all testing materials in a safe and secure location. Unless specifically requested for return by the Buros Center, testing materials should be kept until the test reviews have been published in the Mental Measurements Yearbook (MMY) series, in case questions arise about the content of a review. All test items and test protocols must be treated as confidential. If testing materials are discarded, disposal must occur in a manner that ensures no materials can be accessed by third parties. For additional information, consult the guidance provided by the American Psychological Association’s Committee on Psychological Tests and Assessment.
Major Objectives
Reviews should be written with these objectives in mind: (a) to provide test users with carefully prepared appraisals of test materials that provide guidance in selecting and using tests, (b) to stimulate progress toward higher professional standards of test construction by commending good work, by censuring poor work, and by suggesting improvements, (c) to impel test authors and publishers to provide more detailed information about the construction, norms (as appropriate), validity and fairness evidence, reliability, appropriate uses, and possible misuses of their tests.
Standards
Criteria employed for the evaluation of a test should be those generally accepted and endorsed by the professional community. One very useful source of such criteria is the Standards for Educational and Psychological Testing (2014), which was prepared by a joint committee of the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education. This publication can be downloaded at no cost (or purchased) from the American Educational Research Association website and is available from most university libraries.
Accuracy
Please double-check the factual accuracy of your statements against the test materials provided. Quotation marks, page numbers, and reference citations are required elements for all quotations, using the format of the seventh edition of the Publication Manual of the American Psychological Association (2019). Proof copies of all reviews will be sent to the first author for examination.
Criticism
Reviews should be evaluative, giving credit where credit is due, and describing weaknesses with attention to their likely implications and effects. Reviews should be written primarily for the rank and file of test users. An indication of the relative importance and quality of a test will help users choose tests more wisely with respect to competing instruments.
Contacting Test Authors and Publishers
If a test manual gives insufficient, contradictory, or ambiguous information regarding the construction, reliability, validity evidence, normative data, or use of a test, reviewers are urged to contact the authors and publishers directly for further information. That said, test authors and publishers are responsible for presenting adequate data in test manuals; failure to do so should be pointed out. If information not available to a test purchaser is used in the review, the source of that information should be clearly indicated.
References
Reviewers should cite references in their reviews as needed to acknowledge sources properly, but should limit the number cited, double-check all references for accuracy, and make sure all citations in the text have a corresponding reference listed and vice versa. Reviewers should use and cite primary sources only (i.e., avoid “as cited in” references). Otherwise, reference citations and lists should follow the rules and format outlined in the seventh edition of the Publication Manual of the American Psychological Association (2019).
Dual Reviews
To secure a better representation of viewpoints, most tests will have two separate reviews. The editors may delete overlapping, noncritical content in reviews. Reviews will be edited carefully. Substantive changes to reviews will not be made without the reviewer's notification.
Publisher Quotations
Reviewers are advised that test publishers are allowed to use short quotations (up to 50 words, full sentences only) from test reviews that are consistent with the overall test evaluation written by our reviewers. Test publishers must obtain permission from the Buros Center prior to each use.
Previous Reviews
Often an earlier edition of a test has been reviewed in a previous Mental Measurements Yearbook. If you would like to examine these reviews and are otherwise unable to access them (through EBSCO, Ovid, or the print Mental Measurements Yearbooks), you may request electronic copies from our assistant editor Joel Puchalla at jpuchalla@buros.org.
Part 2: Preparing and Submitting Test Reviews
General Format
Reviews should be formatted according to APA style and use a common font (like Aptos, Calibri, or Times New Roman). Begin by indicating the test and forms to be reviewed, using the test title provided by the test publisher, followed by your name, title, and affiliation (e.g., John Doe, Professor of Educational Psychology, University of Maryland, College Park, Maryland). The review itself should follow Buros’s five-section organization sequence (including sections called Description, Development, Technical, Commentary, and Summary, all of which are further explained in the Organization section below). Using the same structure across reviews makes it easier for readers to follow and allows them to make comparisons among different tests they may be considering for use.
Length
Reviews should be concise, approximately 1,000 to 1,600 words.
Organization
Please organize the review in Buros’s preferred five-section format described below.
Section 1: Description
This section gives a general description of the test, usually including the purposes of the assessment, the target population, and the intended uses of the test. In addition, information about administration of the test should be summarized, along with information about the scores and scoring procedures.
Section 2: Development
This section describes how the test was developed, what underlying assumptions or theory guided decisions about how to define the construct, and details of item development and refinement. Discuss results of pilot testing in this section. In addition, the reviewer might comment on any steps undertaken in selecting the final set of items for the test and any evaluations of the appropriateness of these items for measuring the construct(s) of interest.
Section 3: Technical
This section can be divided into three subsections: standardization, reliability, and validity evidence. Subheadings may be used but are not required. In addition, test fairness should be addressed by presenting and evaluating evidence offered regarding accessibility, a feature of tests that helps assure all test takers have an unobstructed opportunity to demonstrate their standing on the construct of interest. Fairness considerations reflect efforts to minimize construct-irrelevant variance and apply to all steps in the testing process—test design and development, validation, administration, scoring, and score interpretation. (Note: Treatment of test fairness may occur in the Development and/or Commentary sections, instead of or in addition to the Technical section.)
Standardization
In describing standardization, present information about the standardization, development, and/or norm sample(s), including how well the samples match the intended population. Appropriateness of the norms for all individuals in the target population, including individuals from specific subgroups of test takers, should be discussed. These diverse subgroups include those defined by such dimensions as race, ethnicity, culture, age, gender, as well as socioeconomic and disability status, among other individual characteristics.
Reliability
This subsection presents evidence for score consistency. The types of reliability estimates and their magnitudes should be presented in summary fashion. Brief comments about the acceptability of the levels of reliability, the sample used for determining these estimates, and related issues are pertinent to this section. More extensive treatment of reliability concerns may occur in the Commentary section.
Validity
In discussing validity, present evidence that supports interpretations and potential uses of test results. Studies designed to gather evidence of valid uses of test scores should be summarized. Information about test content and the adequacy with which the test measures the intended construct also should be presented.
- If the test is intended to be used to make classifications or predictions, evidence supporting these uses should be described in this section.
- In addition, reviews should examine the differential validity of the test across subgroups that are included in the intended population (e.g., gender and racial subgroups) and should address differential item functioning if not addressed elsewhere in the review.
- Brief comments about the acceptability of the evidence presented to support test score interpretation and use belong in this section. Extensive commentary concerning validity evidence should occur in the Commentary section.
- Consistent with current measurement standards, a test itself is not deemed “valid.” Rather, validity is appraised in terms of the uses of test scores and how well test results meet the intended purposes of the test.
Section 4: Commentary
This section provides an opportunity for the reviewer to address the overall strengths and weaknesses of the test. The adequacy of the theoretical model supporting test use should be examined, together with the impact of current research evidence on the test's assumptions. The reviewer should indicate the extent to which the evidence presented in the test materials and scholarly research support the use of the test with underrepresented or marginalized groups. If another test should or might be considered for use, that test may be listed, cited, and referenced.
Section 5: Summary
In about six or seven sentences, the reviewer is to offer conclusions about the overall quality of the test and recommendations regarding its use. The summary should be as concise and explicit as possible.
Editorial Changes
The Editors reserve the right to make or request changes in or to reject any review that does not meet the standards of the Mental Measurements Yearbook series.
Electronic Submission
Please send a separate email and attachment for each completed test review. Include a personal message clearly identifying the test review that is attached to the email message. Reviewers are advised to retain an electronic copy of the review. Reviews should be submitted to our assistant editor Joel Puchalla at jpuchalla@buros.org.
More Information
For details about receiving credit for authoring reviews, see our Continuing Education and Continuing Professional Development program.
Read sample reviews.
Review our Reviewer FAQ.
Review our Editorial Policy.