Mental Measurements Yearbook
Menu:

Mental Measurements Yearbook, produced by the Buros Institute, contains full-text information about, and reviews of, all English-language standardized tests covering educational skills, personality, vocational aptitude, psychology, and related areas, as included in the printed Mental Measurements Yearbooks.

Segments and Years of Coverage
Name   Update Frequency
MMYB-OV Mental Measurements Yearbook   Semi-Annually
MMYBT Mental Measurements Yearbook and Tests in Print   Semi-Annually

The number of databases that you can select for a multifile search session is limited by database segments rather than by actual databases. The Ovid multifile segment limit is set at 120 to avoid impacting your search sessions. This database includes 2 segments.

This database is updated online semi-annually.

 

Fields
The following list is sorted alphabetically by field alias. Click a field name to see the description and search information.
All Fields in this Database
  Accession Number (AN) Full Text (TX) Purpose (PP)
  Acronym (AC) Level (LE) Restricted Distribution (RD)
  Administration (AD) Manual (MA) Review Indicator (RI)
  All Searchable Fields (AF) Mental Measurements Yearbook (YB) Reviewers (RV)
  Authors (AU) Note (NT) Special Editions (SE)
  Classification Code (CL) Original MMY Test Citation (TY) Scores (SR)
  Comments (CM) Parts (PA) Sublistings from Description (SL)
  Copyright (CP) Population (PO) Subtests from Description (ST)
  Cross References (CR) Price (PR) Test Description (TD)
  Database (DB) Print Status (PS) Time (TM)
  Date of Publication (DP) Publisher (PB) Title (TI)
  Editions (ED) Publisher Name (PN) Update Code (UP)
  Forms (FO)    
Go: Menu or Back 
Default Fields for Unqualified Searches (MP): Searching for a term without specifying a field in Advanced search, or specifying .mp., defaults to the following ‘multi-purpose’ (.mp.) fields for this database: rv,ti,tx.
  Reviewers (RV) Title (TI) Full Text (TX)
Go: Menu or Back 

Default Fields for Display, Print, Email, and Save: The following fields are included by default for each record.

  Accession Number (AN) Level (LE) Review Indicator (RI)
  Acronym (AC) Manual (MA) Reviewers (RV)
  Administration (AD) Mental Measurements Yearbook (YB) Scores (SR)
  Authors (AU) Note (NT) Special Editions (SE)
  Classification Code (CL) Original MMY Test Citation (TY) Sublistings from Description (SL)
  Comments (CM) Population (PO) Subtests from Description (ST)
  Copyright (CP) Price (PR) Test Description (TD)
  Cross References (CR) Publisher (PB) Time (TM)
  Database (DB) Publisher Name (PN) Title (TI)
  Date of Publication (DP) Purpose (PP) Update Code (UP)
  Full Text (TX) Restricted Distribution (RD)  
       
Go: Menu or Back 

All Fields for Display, Print, Email, and Save: Use the Select Fields button in the Results Manager at the bottom of the Main Search Page to choose the fields for a record.

  Accession Number (AN) Level (LE) Review Indicator (RI)
  Acronym (AC) Manual (MA) Reviewers (RV)
  Administration (AD) Mental Measurements Yearbook (YB) Scores (SR)
  Authors (AU) Note (NT) Special Editions (SE)
  Classification Code (CL) Original MMY Test Citation (TY) Sublistings from Description (SL)
  Comments (CM) Population (PO) Subtests from Description (ST)
  Copyright (CP) Price (PR) Test Description (TD)
  Cross References (CR) Publisher (PB) Time (TM)
  Database (DB) Publisher Name (PN) Title (TI)
  Date of Publication (DP) Purpose (PP) Update Code (UP)
  Full Text (TX) Restricted Distribution (RD)  
Go: Menu or Back 
The following list is sorted alphabetically by the two-letter label, and includes the relevant alias, at least one example for all searchable fields, and a description of the field.
Label Name / Example
AC Acronym [Word Indexed]
era.ac.
math.ac.
 

The Acronym (AC) field contains the acronym of the test described in the record.

Acronyms are only listed if the author or publisher has made substantial use of the acronym in referring to the test, or if the test is widely known by the acronym.

Back 
AD Administration [Word Indexed]
primary.ad.
test.ad.
 

The Administration (AD) field contains a brief statement of the test's mode of administration.

Typically this will be "Group," "Individual," or "Individual or Group."

Back 
AF All Searchable Fields [Search Alias]
Systematic.af.
 

All Searchable Fields (AF) is an alias for all of the fields that occur in the source documents, including value-added fields such as Title (TI).

Back 
AN Accession Number [Phrase Indexed]
"09039178".an.
"007002678".an.
 

The Accession Number (AN) field contains a unique number assigned to each test for identification.

For instance, in the current MMY data, the accession number is an 8-digit number. In the archived files, however, the accession number is in the format 9:478 (indicating that the review was of test #478 in the 9th MMY). To search the index for archived files, replace the colon with a space, e.g., 9 478.an.
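The format conversion above can be sketched in Python (a hypothetical helper for illustration only, not part of Ovid):

```python
# Hypothetical helper: build an Ovid AN search string from an archived
# MMY citation such as "9:478" (test #478 in the 9th MMY).
def an_search(citation: str) -> str:
    # The Ovid index stores archived accession numbers with a space
    # instead of a colon, so substitute before appending the field label.
    return citation.replace(":", " ") + ".an."

print(an_search("9:478"))     # -> 9 478.an.
print(an_search("09039178"))  # current 8-digit numbers pass through: 09039178.an.
```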

Back 
AU Authors [Phrase Indexed]
act inc.au.
ahlgren andrew.au.
 

The Authors (AU) field identifies the author(s) of the test described in the record. Authors include individuals and organizations.

For individual authors, names display as last name, first name, and middle initial.

Back 
CL Classification Code [Word Indexed]
sensory.cl.
aptitude.cl.
 

The Classification Code (CL) field indicates the categories assigned to the test.

Classification Code Categories:

Achievement
Adjustment/Adaptive Functioning
Alcohol and Substance Use
Behavior Assessment
Blind
Business Education and Relationships
Criminal Justice and Forensic
Developmental
Driving and Safety
Education
English and Language
Family and Relationships
Fine Arts
Foreign Languages
General Miscellaneous
Handwriting
Health and Physical Education
Intelligence and General Aptitude
Learning Disabilities
Mathematics
Multi-Aptitude Batteries
Neuropsychological
Personality
Philosophy and Religion
Psychology
Reading
Record and Report Forms
Science
Sensory-Motor
Social Studies
Socio-Economic Status
Speech and Hearing
Test Programs
unknown
Vocations

The CL field is word indexed.

The CL field is available for MMYBT and MMYB.

Back 
CM Comments [Word Indexed]
"10".cm.
"1024x768".cm.
 

The Comments (CM) field reproduces the Comments field from the test description, which includes additional remarks about the test.

It will be one paragraph and may include two or three sentences, sometimes separated by semicolons.

Back 
CP Copyright [Word Indexed]
regents.cp.
"2011".cp.
 

The Copyright (CP) field contains the copyright information associated with an article.

Back 
CR Cross References [Word Indexed]
"10".cr.
"10114".cr.
 

The Cross References (CR) field reproduces the field from the test description that lists the editions of the MMY (and sometimes TIP) in which this test, or an earlier version of it, was included.

The CR field is available for MMYBT and MMYB.

Back 
DB Database [Phrase Indexed]
mental measurements yearbook.db.
mental measurements yearbook and tests in print.db.
 

The Database (DB) field indicates the database from which the original record was retrieved.

Back 
DP Date of Publication [Phrase Indexed]
1929 1993.dp.
1911.dp.
 

The Date of Publication (DP) field contains the date(s) of publication of the test described in the record. Dates display as four-digit years (1991) and ranges of years (1992-1993 or 1980-82).

Back 
ED Editions [Word Indexed]
 

The Editions (ED) field contains the Editions for the test.

The ED field is available for MMYBT.

Back 
FO Forms [Word Indexed]
 

The Forms (FO) field contains the Forms for the test.

The FO field is available for MMYBT.

Back 
LE Level [Word Indexed]
280.le.
2sv.le.
 

The Level (LE) field contains the Levels for the test.

Usually this is age/grade levels.

Back 
MA Manual [Word Indexed]
aprovechamiento.ma.
aptitude.ma.
 

The Manual (MA) field contains the current manual(s), publication date(s), and the number of pages in the manual.

Back 
NT Note [Word Indexed]
arts.nt.
"1982".nt.
 

The Note (NT) field contains additional information about the document.

The NT field is available for MMYBT and MMYB.

Back 
PA Parts [Word Indexed]
 

The Parts (PA) field contains the Parts for the test.

The PA field is available for MMYBT.

Back 
PB Publisher [Word Indexed]
abinger.pb.
0.pb.
 

The Publisher (PB) field contains the publisher and publisher's address for the test described in the record.

The PB field is word indexed.

The PB field is available for MMYBT.

Back 
PN Publisher Name [Word Indexed]
arbor.pn.
andrews.pn.
 

The Publisher Name (PN) field contains the publisher name for the test described in the record.

The PN field is word indexed.

The PN field is available for MMYB.

Back 
PO Population [Word Indexed]
"14".po.
12th.po.
 

The Population (PO) field is the stated population for whom the test is intended (as provided by author/publisher).

Back 
PP Purpose [Word Indexed]
"10".pp.
abreast.pp.
 

The Purpose (PP) field states the purpose of the test, often quoted from the test materials.

Back 
PR Price [Word Indexed]
"000".pr.
"114".pr.
 

The Price (PR) field contains the price data with description of components and price for each.

Back 
PS Print Status [Phrase Indexed]
 

The Print Status (PS) field helps users determine whether the test being described is in print.

There are three options for this field:

0 = the test is in print.
1 = the test is out of print.
2 = the test description has been archived, which could mean one of two things:
(1) the test has been superseded by a newer edition of the same test, or
(2) the description/reviews appeared in one of the earliest yearbooks (vols. 1-8), which were published in New Jersey by Oscar Buros himself and for which no digital copies of the reviews existed.

In the latter case, the review records have been created by OCR scanning of the hard-copy volumes of the yearbooks.

The PS field is available for MMYBT.

Back 
RD Restricted Distribution [Word Indexed]
cerad.rd.
benchmarks.rd.
 

The Restricted Distribution (RD) field indicates if special restrictions are placed on purchase/use of this test.

The RD field is available for MMYBT and MMYB.

Back 
RI Review Indicator [Phrase Indexed]
no review available.ri.
1 reviews available.ri.
  The Review Indicator (RI) field indicates the number of reviews available in the full text of the record.

The RI field is available for MMYB and MMYBT.

Back 
RV Reviewers [Word Indexed]
air.rv.
adelaide.rv.
 

The Reviewers (RV) field contains the names of the reviewer(s) whose review is included in the full text of the record.

Reviewers' names display as last name, first name, and middle initial.

The RV field is word indexed.

The RV field is available for MMYB and MMYBT.
Back 
SE Special Edition [Word Indexed]
ages.se.
aepsi.se.
 

The Special Edition (SE) field indicates whether special editions of this test are available. It may say "Foreign Language Editions," or it may say "Special Editions" and list large print, Braille, etc.

The SE field is available for MMYBT and MMYB.

Back 
SL Sublistings from Description [Word Indexed]
128.sl.
10th.sl.
 

The Sublistings from Description (SL) field is reproduced from the test description. It may be a few lines long or many lines long, with a hard return between lines.

The SL field is available for MMYBT and MMYB.

Back 
SR Scores [Word Indexed]
"20".sr.
3warmth.sr.
 

The Scores (SR) field contains information on scoring mechanisms for the test described in the record. The information provided is very detailed and can help you determine what the test actually measures.

Scoring is included for the test as a whole and for each part of the test, if appropriate.
Back 
ST Subtests from Description [Word Indexed]
biz.st.
beck.st.
 

The Subtests from Description (ST) field lists the subtests of the test. Oftentimes the subtests are listed as sublistings instead, in which case this field is blank; some tests have no subtests at all.

The ST field is available for MMYBT and MMYB.

Back 
TD Test Description [Word Indexed]
1050.td.
1000.td.
 

The Test Description (TD) field is the most current description of the test.

It precedes the reviews in the MMY and is the complete description in TIP.

The TD field is available for MMYBT and MMYB.

Back 
TI Title [Word Indexed]
"18".ti.
"14".ti.
 

The Title (TI) field contains the full name of the test described in the record.

Back 
TM Time [Word Indexed]
autobiographical.tm.
 

The Time (TM) field contains the administration time provided by the Author/Publisher.

Back 
TX Full Text [Word Indexed]
001698620304700205.tx.
  The Full Text (TX) field contains the full text of the record, which is organized into named sections such as Purpose, Population, Price, Administration, Scores, and Time.
Back 
TY Original MMY Test Citation [Word Indexed]
09049142.ty.
09049141.ty.
  The Original MMY Test Citation (TY) field contains the descriptive test information from the original Mental Measurements Yearbook citation.
Back 
UP Update Code [Phrase Indexed]
20110800.up.
 

The Update Code (UP) field displays in all records and contains the date the record was released into the database. It consists of eight digits, in YYYYMMDD format, where YYYY is the release year, MM is the release month, and DD is the approximate release day.
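As an illustration, an UP value can be parsed with Python's standard datetime module (a hypothetical sketch; parse_update_code is not an Ovid function):

```python
from datetime import datetime

# Hypothetical sketch: parse an Update Code such as 20110800 (YYYYMMDD).
# Since the release day is approximate, a 00 day is mapped to 01 here.
def parse_update_code(up: str) -> datetime:
    if up.endswith("00"):
        up = up[:-2] + "01"
    return datetime.strptime(up, "%Y%m%d")

print(parse_update_code("20110800").strftime("%B %Y"))  # August 2011
```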

Back 
YB Mental Measurements Yearbook [Word Indexed]
seventh.yb.
1985.yb.

 

The Mental Measurements Yearbook (YB) field indicates the number of the Mental Measurements Yearbook in which the test was originally reviewed.

Note: This database includes all reviews published in the Mental Measurements Yearbook print series from 1938 through the present.

The YB field is available for MMYB and MMYBT.

Go: Menu or Back 

 

Advanced Searching
You can use special search syntax listed below to combine search terms or strategically develop a search. Full documentation is provided in the Advanced Searching Techniques section of the Online Help.
Operator Syntax Search Example  
OR x or y mind or body

 

 

The OR operator retrieves records that contain any or all of the search terms. For example, the search heart attack or myocardial infarction retrieves results that contain the terms heart attack, myocardial infarction or both terms; results are all inclusive. You can use the OR operator in both unqualified searches and searches applied to a specific field.
AND x and y mind and body

 

 

The AND operator retrieves only those records that include all of the search terms. For example, the search blood pressure and stroke retrieves results that contain the term blood pressure and the term stroke together in the same record; results are exclusive of records that do not contain both of these terms. You can use the AND operator in both unqualified searches and searches applied to a specific field.
NOT x not y light not dark

 

 

The NOT operator retrieves records that contain the first search term and excludes the second search term. For example, the search health reform not health maintenance organizations retrieves only those records that contain the term health reform but excludes the term health maintenance organizations. In this way, you can use the NOT operator to restrict results to a specific topic.
You can use the NOT operator in both unqualified searches and searches applied to a specific field.
Adjacency (ADJ) x y mind adj body

 

 

The Adjacent operator (ADJ) retrieves records with search terms next to each other in that specific order. You do not need to separate search terms manually by inserting ADJ between them, because when you separate terms with a space on the command line, Ovid automatically searches for the terms adjacent to one another. For example, the search blood pressure is identical to the search blood adj pressure.
Defined Adjacency (ADJn) x ADJn y therapy adj3 animal

 

 

The defined adjacency operator (ADJn) retrieves records that contain search terms within a specified number (n-1) of words from each other in any order (stop-words included). To use the adjacency operator, separate your search terms with ADJ and a number from 1 to 99 as explained below:

           ADJ1     Next to each other, in any order
           ADJ2     Next to each other, in any order, up to 1 word in between
           ADJ3     Next to each other, in any order, up to 2 words in between
           ADJ99   Next to each other, in any order, up to 98 words in between

For example, the search physician adj5 relationship retrieves records that contain the words physician and relationship with a maximum of four words in between in either direction. This particular search retrieves records containing such phrases as physician patient relationship, patient physician relationship, or relationship between cancer patient and physician.
Please note that Ovid's order of operations handles terms within parentheses first. It is therefore recommended to apply the ADJn operator in one-to-one operations to avoid missing results. For example, stroke adj4 (blood pressure or high blood pressure) could miss some combinations of stroke with high blood pressure. The optimal way to execute this search on Ovid is: (stroke adj4 blood pressure) OR (stroke adj4 high blood pressure).
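The ADJn semantics described above can be sketched as follows (an illustrative Python model, not Ovid code; it assumes simple whitespace tokenization):

```python
# Illustrative model of ADJn: two terms match when they appear in any
# order with at most n-1 words in between, i.e. positions differ by <= n.
def adj(text: str, a: str, b: str, n: int) -> bool:
    words = text.lower().split()
    pos_a = [i for i, w in enumerate(words) if w == a]
    pos_b = [i for i, w in enumerate(words) if w == b]
    return any(abs(i - j) <= n for i in pos_a for j in pos_b)

print(adj("the physician patient relationship", "physician", "relationship", 5))        # True
print(adj("physician and the care team relationship", "relationship", "physician", 5))  # True (any order)
```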
Frequency (FREQ) x.ab./FREQ=n 2003.nt./freq=1

 

 

The frequency operator (FREQ) lets you specify a threshold of occurrence of a term in the records retrieved from your search. Records containing your search term are retrieved only if the term occurs at least the specified (n) number of times. In general, records that contain many instances of your search term are more relevant than records that contain fewer instances. The frequency operator is particularly useful when searching a text field, such as Abstract or Full Text, for a common word or phrase.
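A minimal sketch of the FREQ threshold (illustrative Python, assuming simple word counting rather than Ovid's actual indexing):

```python
# Illustrative model of /FREQ=n: keep a record only if the term occurs
# at least n times in the chosen field's text.
def freq_match(field_text: str, term: str, n: int) -> bool:
    return field_text.lower().split().count(term.lower()) >= n

abstract = "stroke risk rises with age; stroke prevention lowers stroke mortality"
print(freq_match(abstract, "stroke", 3))  # True
print(freq_match(abstract, "stroke", 4))  # False
```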
Unlimited Truncation ($) x$

rat$

 

 

Unlimited truncation retrieves all possible suffix variations of the root word indicated. To apply unlimited truncation to a term, type the root word or phrase followed by either of the truncation characters: $ (dollar sign) or * (asterisk). For example, the truncated search rat* retrieves the word rat as well as words such as rats, ratio, and rational.
Limited Truncation ($) x$n

dog$1

 

 

Limited truncation specifies a maximum number of characters that may follow the root word or phrase. For example, the truncated search dog$1 retrieves results with the words dog and dogs; but it does not retrieve results with the word dogma.
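Both truncation styles can be modeled with regular expressions (an illustrative sketch, not how Ovid is implemented):

```python
import re

# Illustrative model: unlimited truncation (rat$) allows any suffix;
# limited truncation (dog$1) allows at most n extra characters.
def truncation_regex(root, n=None):
    suffix = ".*" if n is None else f".{{0,{n}}}"
    return re.compile(rf"^{re.escape(root)}{suffix}$", re.IGNORECASE)

print(bool(truncation_regex("rat").match("rats")))      # True
print(bool(truncation_regex("dog", 1).match("dogs")))   # True
print(bool(truncation_regex("dog", 1).match("dogma")))  # False: 2 extra characters
```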
Mandated Wildcard (#) xx#y

wom#n

 

 

Searching with a mandated wildcard retrieves all possible variations of a word in which the wildcard is present in the specified place. You can use it at the end of a term to limit results to only those that contain the word plus the mandated character. For example, the search dog# retrieves results that contain the word dogs, but not those that contain the word dog, effectively limiting results to only those that contain the plural form of the word. The mandated wild card character (#) is also useful for retrieving specialized plural forms of a word. For example, the search wom#n retrieves results that contain both woman and women. You can use multiple wild cards in a single query word.
Optional Wildcard (?) xx?y colo?r

 

 

The optional wild card character (?) can be used within or at the end of a search term to substitute for one or no characters. This wild card is useful for retrieving documents with British and American word variants since it specifies that you want retrieval whether or not the extra character is present. For example, the optional wild card search colo?r retrieves results that contain the words color or colour. You can use multiple wild cards in a single query word.
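The two wildcard characters can likewise be modeled with regular expressions (illustrative only; Ovid's real matching runs against its word index, not raw regex):

```python
import re

# Illustrative model: the mandated wildcard (#) stands for exactly one
# character; the optional wildcard (?) stands for one character or none.
def wildcard_regex(term):
    pattern = re.escape(term).replace(r"\#", ".").replace(r"\?", ".?")
    return re.compile(rf"^{pattern}$", re.IGNORECASE)

print(bool(wildcard_regex("wom#n").match("women")))   # True
print(bool(wildcard_regex("wom#n").match("woman")))   # True
print(bool(wildcard_regex("colo?r").match("color")))  # True
print(bool(wildcard_regex("dog#").match("dog")))      # False: # mandates a character
```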
Literal String ("") "x / y"

"Heat / Cold Application"

 

"n"

"3".vo

 

 

Quotation marks can be used to retrieve records that contain literal strings, when the string includes special characters, such as a forward slash (/).

Quotation marks can also be used to retrieve records containing numbers that might otherwise be interpreted as references to earlier searches. In the example, a search for 3.vo would be interpreted as limiting search statement 3 from your search history to the Volume field. By enclosing the number in quotation marks, the search instead retrieves documents with a 3 in the volume field.

Go: Menu or Back 

 

Stopwords

The Ovid search engine applies so-called "run-time stopword processing": at search time, the engine ignores the stopwords and, as, by, for, from, in, is, of, on, that, the, this, to, was, were, and with.

Therefore the search at risk for diabetes.ti. will also find at risk of diabetes: the one-word distance in between is preserved, but the stopword "for" itself is ignored.
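This behavior can be sketched as follows (illustrative Python; the stopword list is the one given above):

```python
STOPWORDS = {"and", "as", "by", "for", "from", "in", "is", "of", "on",
             "that", "the", "this", "to", "was", "were", "with"}

# Illustrative model: each stopword collapses to a one-word placeholder,
# so the word distance is preserved while the stopword itself is ignored.
def normalize(phrase):
    return tuple("*" if w in STOPWORDS else w for w in phrase.lower().split())

# "for" and "of" are both stopwords, so these phrases match:
print(normalize("at risk for diabetes") == normalize("at risk of diabetes"))  # True
```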

Go: Menu or Back 

 

Limits
The following limits are available for this database. See Database Limits in the Ovid Online Help for details on applying limits.

Limit

Syntax
Yearbook Edition Sentence Syntax: limit 1 to 11th edition
  A limit to Yearbook Edition restricts retrieval to documents published in a particular edition of the Mental Measurements Yearbook.
Go: Menu or Back 

 

Tools

Currently no tools are available for this database.

Go: Menu or Back 

 

Changing to this Database
To change a search session to a segment of this database from another database or another segment, use the following syntax in the Ovid Syntax tab:
  Command Syntax: ..c/mmyb
  Sentence Syntax: use mmyb
Go: Menu or Back 

 

Sample Documents
Sample 1 Mental Measurements Yearbook record
Accession Number
  14172045
Classification Code
  Speech-and-Hearing [17]
Database
  Mental Measurements Yearbook
Mental Measurements Yearbook
  The Fourteenth Mental Measurements Yearbook 2001
Title
  Cooper Assessment for Stuttering Syndromes.
Acronym
  CASS-C.
Author
  Cooper, Eugene B.; Cooper, Crystal S.
Purpose
  Identifies and quantifies affective, behavioral, and cognitive components of stuttering syndromes.
Publisher
  The Psychological Corporation, 555 Academic Ct, San Antonio, TX 78204-2498
Publisher Name
  The Psychological Corporation
Date of Publication
 1995
Population
  Children (ages 3-13); Adolescents and Adults (age 13 and over)
Scores
  Chronicity Prediction Checklist, Parent Assessment of Fluency Digest, Teacher Assessment of Fluency Digest, Child Fluency Assessment Digest, Speech Situation Avoidance Reactions, Frequency Disfluencies, Client's Assessment of the Fluency Problem, Clinician's Assessment of the Fluency Problem, Chronicity of the Fluency Problem.
Administration
  Individual
Manual
  Manual, 1995, 1 page.
Price
  Price data not available.
Time
  [60] minutes.
Reviewers
  Hurford, David P. (Pittsburg State University); Norris, Janet (Louisiana State University).
Review Indicator
  2 Reviews Available
Comments
  Each CASS package includes both DOS and Windows(R) versions; system requirements: DOS--1.5 MB hard disk space, Windows(R) (3.1 or higher)--1 MB hard disk space. As of February 2001, publisher advises test is now out of print.
Full Text
  Review of the Cooper Assessment for Stuttering Syndromes by DAVID P. HURFORD, Director of the Center for the Assessment and Remediation of Reading Difficulties and Professor of Psychology and Counseling, Pittsburg State University, Pittsburg, KS:

The Cooper Assessment for Stuttering syndromes (CASS) is composed of two protocols, the Cooper
Assessment for Stuttering syndromes: Children's Version (CASS-C), and the Cooper Assessment for
Stuttering Syndromes: Adolescent and Adult Version (CASS-A). With regard to assessing stuttering
behavior and its related cognitive and affective components, it is prudent to examine children and
adolescents/adults separately. Approximately 75% of children who stutter will not stutter after the
onset of puberty. Those who do, and who have not been assisted by treatment, are quite likely to
stutter for the rest of their lives. One of the facets of the CASS-C is to determine the likelihood of
sustained disfluency in oral language. The CASS-A and the CASS-C are both computer programs designed
to assist the speech-language pathologist (SLP) in organizing important information regarding the
behavioral, cognitive, and affective components of stuttering. The CASS-A/C is a self-contained protocol
on a 3.5-inch diskette. No manual or supporting material accompanies the diskettes.

The CASS-C and CASS-A are not assessment instruments. They are fundamentally organizational protocols.
The results of the CASS provide a summary of the data that were entered by the speech-language
pathologist and derive a mean fluency disorder rating.

The CASS-C included the Chronicity Prediction Checklist, the Parent Fluency Assessment Digest, the
Teacher Fluency Assessment Digest, and the Child Fluency Assessment Digest. From these, a report is
derived that provides a useful compilation of the likelihood of continued stuttering difficulty, the
parent's and teacher's judgment of the severity of the child's disfluency as well as observational
information concerning the child's stuttering behavior, and his or her cognitive and affective response
to stuttering. Last, there is an evaluation of the child's disfluency rate.

The CASS-A includes the Situation Avoidance Reactions, the Feelings and Attitudes Regarding Fluency,
the Disfluency Frequency, Type and Duration, the Disfluency Concomitant Behaviors, the Client
Perceptions of Severity, the Clinician Perceptions of Severity, and the Assessing Chronicity protocols.
These components help to determine the severity of the individual's stuttering behavior, its etiology and
development, as well as determining the nature of the cognitive and affective reactions to stuttering
and the situations in which it occurs.

The strengths of the CASS-A/C are that the protocols include theoretically relevant items regarding not
only the stuttering behavior, but also the secondary issues of the cognitive and affective responses to
stuttering. Therapies have traditionally emphasized remediating the disfluencies rather than considering
the individual's emotional and cognitive responses to stuttering. These can be quite disabling as well.
Including the individual's cognitive and emotional responses to stuttering helps to provide a more
complete understanding of the effect of stuttering on the client. Many of these concerns have grown
out of failure of therapeutic interventions. In such cases, the client leaves the therapy with the
stuttering intact and no support for the ancillary difficulties associated with disfluent speech. Cooper
indicates that these individuals have chronic perseverative stuttering syndrome. Therapies should
address the secondary issues related to stuttering for these individuals and should help them to develop
compensatory mechanisms or other techniques for adjustment. Conversely, there are treatment
programs that boast 93% initial success and 75% sustained success in which clients retain fluent speech
for 1 to 4 years post-treatment (e.g., Hollins Communication Research Institute in Roanoke, Virginia).

The disadvantages of using the CASS-A/C for purposes other than organizing and summarizing are
numerous. No psychometric properties were evaluated or reported for either version of the CASS. Not
only will the results of the CASS be idiosyncratic to the professional doing the evaluation, there is no
guidance to determine how to differentiate between a mild, moderate, severe, or a very severe
problem, which the SLP must determine when administering the CASS. The mean fluency disorder score
is based entirely on the values produced by the SLP, the client, and the parents/teachers. No studies
of reliability or validity were conducted and no norming sample existed to determine the nature of the
derived scores.

The CASS-A/C is relatively simple to use; however, the entire test is computer based with no supporting
material. In fact, there is no manual. As a result, many potential users will find this device seriously
lacking. Some time and effort is required to explore and to become familiar with the various aspects of
the program.

SUMMARY. The CASS-A/C should only be used as an organizational vehicle. The items are appropriate for
fully comprehending the nature of the individual's stuttering difficulty. However, the SLP will need to
determine the meaning of the summary information. There are no norms associated with either version
of the CASS. There is no evidence that individuals with appropriate fluency would respond any
differently to many of the items on the CASS as compared to individuals with disfluency. It should be
emphasized that no studies of reliability or validity were reported. As a result, there is no evidence that
scores from the CASS-A/C are reliable or valid. Using the CASS-A or CASS-C for any purpose other than
organizing clinical interview information is certainly not appropriate or warranted and definitely is not
advised.

Review of the Cooper Assessment for Stuttering Syndromes by JANET NORRIS, Professor of
Communication Disorders, Louisiana State University, Baton Rouge, LA:

The Cooper Assessment for Stuttering Syndromes is a computerized program designed to assist a
practitioner in conducting an evaluation of stuttering. By following the directions in the program, a
fluency protocol can be completed, with the resulting data automatically tabulated and computed for
means and percentage of occurrence. The data then are organized into a report complete with an
analysis of the results, including judgments of severity and prognosis. The results can be printed as a
report complete with a personal heading or letterhead, or stored for later reference.

Two nearly identical versions of the program are available, one for children and one for adolescents and
adults. Both versions begin by entering identifying information about the individual to be assessed,
including name, address, and date of birth. This is followed by six tasks designed to examine common
phenomena associated with stuttering. These include assessments of speech situation avoidance
reactions, feelings and attitudes regarding stuttering, the percentage of disfluencies produced during
differing speech tasks, types and duration of disfluencies exhibited, extraneous and distracting
behaviors produced while speaking, and a perception of the individual's disorder as judged separately by
the individual and the examiner. Additional comments also can be added to the report.

The protocol requires approximately one hour to complete and requires no materials except a reading
passage that can be printed from the program. Most of the protocol is completed by asking the
individual to respond to questions, making choices along a 5-point scale with descriptors such as
"never," "frequently," or "always." Responses are entered directly into the computer using a keyboard
positioned so that the examiner can observe and talk to the individual without having to shift out of
typing position.

The questions presented are those traditionally asked in a fluency evaluation and are fairly
comprehensive. For example, the speech situation avoidance subtest is composed of 50 questions,
ranging from very general queries (i.e., How often do you avoid speaking situations?) to specific (How
often do you avoid eating in restaurants?). Most common avoidance situations are addressed, including
giving directions, giving one's name on the phone, talking to inattentive people, or talking in the
classroom or during job interviews. The children's version includes the questions designed to address
adult situations and vice versa, for which a "not appropriate" entry can be submitted. This is the longest
task on the protocol and it is likely that children will begin to show distractibility or disinterest before
the final item is reached. However, there is no restriction preventing conduction of the interview
across more than one session and this may be an option for children and some adults.

Feelings and attitudes are similarly addressed, but with 25 questions responded to with "agree,"
"undecided," or "disagree." These questions are designed to evaluate self-perception (Fluency problems
are my own fault) and attitudes (Stutterers should not be called on in class). The protocol calls for the
individual's initial reaction, rather than a thoughtful analysis of the statement.

The third section of the protocol assesses actual speech production using recitation, spontaneous
conversation, and reading. Recitation includes repeating the nursery rhyme "Mary Had a Little Lamb" and
the Pledge of Allegiance, as well as repeating six sentences. As the individual speaks, the examiner clicks
the mouse for each disfluency detected. The program assumes that the examiner is a professional with
knowledge of stuttering, because disfluencies are neither defined nor illustrated with examples. The
program calculates the percentage of disfluently produced words out of the total number of words spoken.
Reading includes oral reading of a 100-word passage and 12 sentences that the individual must complete
(e.g., 5 + 5 = ______).
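The disfluency percentage described above is simple arithmetic; a minimal sketch (hypothetical code, not the Cooper program's actual implementation):

```python
def percent_disfluent(disfluency_clicks: int, total_words: int) -> float:
    """Percentage of disfluently produced words out of the total words
    spoken, as tallied from the examiner's mouse clicks."""
    if total_words <= 0:
        raise ValueError("total_words must be positive")
    return 100.0 * disfluency_clicks / total_words

# e.g., 7 disfluencies detected during the 100-word reading passage
print(percent_disfluent(7, 100))  # 7.0
```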

Finally, a short sample (2-3 minutes) of conversation is obtained and analyzed subjectively by the
examiner for a judgment of the percent of occurrence of disfluencies, while each disfluency heard is
recorded by clicking the mouse. The tasks are most appropriate for older children and adults, especially
the reading passage that appears to be at an upper elementary level. It is surprising that a passage with
a lower readability level was not selected for the children's version.

The fourth section is a checklist of extraneous or secondary behaviors that commonly accompany
disfluent speech. These include facial movements such as wrinkling the forehead or losing eye contact,
body movements such as arm twitches or head jerks, and abnormal breathing patterns such as shallow
breaths or rapid inhalations. Any behaviors observed during the assessment or known to be
characteristic of the individual are to be indicated.

The fifth section is an assessment of the types and duration of disfluencies. Type is defined by such
characteristics as insertion of sounds, words, or phrases; episodes of silent or vocalized prolongation;
or sound, word, or phrase substitutions while speaking. Vocal quality, such as unusual pitch, intensity,
rate, or prosody, also is indicated. Duration of the average disfluent moment is measured
along a scale ranging from fleeting to greater than 4 seconds, with specification of the longest duration
produced during the evaluation.

The final section measures the individual's and the examiner's perception of the disfluency. The
individual provides perceptions of the frequency of their own disfluency, as well as how often they
consider stuttering to be a problem, and how great a problem it is in daily life. The examiner answers
similar questions and also records a judgment of how the individual's perception of the problem compares
to the actual problem as measured by the protocol. The examination ends with a checklist of 10 yes/no
questions regarding common developmental and behavioral characteristics of stuttering. This includes
whether the problem began in early childhood, if periods of normal fluency and control are
experienced, and if the problem has persisted 10 years or more.

The results of the assessment, including tables displaying the results of the different subsections,
complete with totals, means, and percentages, are available almost immediately upon the final click of
the mouse. The program also provides a syndrome indication, such as "remediable stutter syndrome," and
a severity rating, such as "mild" or "severe."

The Cooper protocol is based on traditional methods of assessing the observable products of stuttering,
treated as a series of separate "stutter events," as well as affective, behavioral, environmental, and
cognitive components of the syndrome. No manual is provided that describes the theoretical frame for
the components of the protocol nor explains how items were selected. However, Cooper and Cooper's
(1985) integrated view of stuttering as a complex disorder influenced by multiple factors is well
established in the literature. The questions and measures are based on decades of research that have
explored the areas assessed in this protocol. It would be useful to have reliability data, such as test-
retest performance for a variety of individuals exhibiting a range of stuttering severity, to assess the
accuracy of this computerized method for gathering data. For example, the mouse clicks that are
produced as the person speaks could be distracting or could add tension resulting in greater
disfluencies, especially during the first administration when the procedure is unfamiliar. Validity data are
also lacking. A determination of whether the severity rating and syndrome indication resulting from the
protocol are consistent with actual severity and prognosis would be an important measure of construct
validity.

SUMMARY. The Cooper Assessment for Stuttering Syndromes provides an assessment of a variety of
factors related to stuttering, but each is treated as a separate component. The structure of this
instrument thus limits consideration of the true interaction among the factors that cause and maintain
stuttering. The
Cooper can be used to provide a descriptive assessment of some of the products of stuttering, and it
does this very efficiently. But this is only a beginning point for assessing and treating stuttering. To
conduct an adequate assessment of the syndrome, the examiner must look beyond the surface to view
the complex and dynamic interactions that occur among these and many other factors. This would
include an analysis of the factors that limit or increase disruptions in fluency during conversations that
differ on factors such as topic of discussion and speaking partners, and consideration of the dynamics
between multiple factors simultaneously impacting speech (Smith & Kelly, 1997).

REVIEWER'S REFERENCES

Cooper, E. B., & Cooper, C. S. (1985). Cooper Personalized Fluency Control Therapy--Revised. Allen, TX:
DLM Teaching Resources.

Smith, A., & Kelly, E. (1997). Stuttering: A dynamic, multifactorial model. In R. F. Curlee & G. M. Siegel
(Eds.), Nature and treatment of stuttering: New directions (2nd ed.). Boston: Allyn & Bacon.

Copyright
  Copyright © 2011. The Board of Regents of the University of Nebraska and the Buros Center for Testing. All rights reserved. Any unauthorized use is strictly prohibited. Buros Center for Testing, Buros Institute, Mental Measurements Yearbook, and Tests in Print are all trademarks of the Board of Regents of the University of Nebraska and may not be used without express written consent.
Update Code
  20110800
Sample 2: Mental Measurements Yearbook and Tests in Print record
Accession Number
  TIP07002841
Review Accession Number
  15022608
Classification Code
  Developmental [02]
Database
  Mental Measurements Yearbook and Tests In Print
Mental Measurements Yearbook
  The Fifteenth Mental Measurements Yearbook 2003
Title
  Young Children's Achievement Test
Acronym
  YCAT
Author
  Hresko, Wayne P.; Peak, Pamela K.; Herron, Shelley R.; Bridges, Deanna L.
Purpose
  Designed to help determine early academic abilities.
Publisher
  PRO-ED, 8700 Shoal Creek Blvd., Austin, TX 78757-6897; Telephone: 800-897-3202; FAX: 800-397-7633; E-mail: [email protected]; Web: http://www.proedinc.com
Publisher Name
  PRO-ED
Publisher ID Number
  PRO24262
Date of Publication
  2000
Population
  Ages 4-0 to 7-11
Scores
  General Information, Reading, Mathematics, Writing, Spoken Language, Early Achievement Composite
Administration
  Individual
Price
  2011: $240 per complete kit including examiner's manual (154 pages), picture book (48 pages), 25 student response forms, and 25 profile/examiner record booklets; $75 per examiner's manual; $81 per picture book; $35 per 25 student response forms; $59 per 25 profile/examiner record booklets.
Cross References
  For reviews by Russell N. Carney and Susan J. Maller, see 15:285.
Time
  (25-45) minutes.
Test Description
  Young Children's Achievement Test. Purpose: Designed to help determine early academic abilities. Population: Ages 4-0 to 7-11. Publication Date: 2000. Acronym: YCAT. Scores, 6: General Information, Reading, Mathematics, Writing, Spoken Language, and Early Achievement Composite. Administration: Individual. Price Data, 2011: $240 per complete kit including examiner's manual (154 pages), picture book (48 pages), 25 student response forms, and 25 profile/examiner record booklets; $75 per examiner's manual; $81 per picture book; $35 per 25 student response forms; $59 per 25 profile/examiner record booklets. Time: (25-45) minutes. Authors: Wayne P. Hresko, Pamela K. Peak, Shelley R. Herron, and Deanna L. Bridges. Publisher: PRO-ED. Cross References: For reviews by Russell N. Carney and Susan J. Maller, see 15:285.
Full Text
  Review of the Young Children's Achievement Test by RUSSELL N. CARNEY, Professor of Psychology,
Southwest Missouri State University, Springfield, MO:

DESCRIPTION. The Young Children's Achievement Test (YCAT) is an individually administered test designed
to measure early academic abilities. Its central purpose is to aid in the identification of young children
at risk for school failure. YCAT materials include the examiner's manual, a picture book, profile/examiner
record booklets, and student response forms. Designed for English-speaking preschoolers,
kindergartners, and first-graders (ages 4-0 through 7-11), the five subtests of the YCAT measure General
Information, Reading, Mathematics, Writing, and Spoken Language. The five resultant scores can be
combined to yield an Early Achievement Composite score. The YCAT can also be used to document
educational progress.

The YCAT is nontimed, and the subtests can be administered in any order. Approximately 25 to 45
minutes are required for administration. An easel with color pictures (the picture book) allows for the
presentation of many of the test items. The profile/examiner record booklet lists each question, and
also provides one or more examples of the correct response as an aid to scoring. Items are scored as
either correct (1) or incorrect (0). The examiner begins with the first item on each subtest, and
continues until a ceiling is reached. The subtests are each about 20 items in length, except for the
spoken language subtest, which is composed of 36 items.
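The start-and-ceiling procedure above can be sketched in a few lines; the three-consecutive-misses ceiling is the rule stated in the examiner's manual (as noted later in Maller's review), while the function itself is hypothetical illustration:

```python
def subtest_raw_score(responses, ceiling_run=3):
    """Administer items in order from item 1 and stop once `ceiling_run`
    consecutive items are missed (the YCAT's ceiling rule). `responses`
    holds the 1/0 item scores in administration order; the return value
    is the raw score (number of items passed before the ceiling)."""
    raw, streak = 0, 0
    for score in responses:
        raw += score
        streak = 0 if score else streak + 1
        if streak >= ceiling_run:
            break
    return raw

# testing stops at the third consecutive miss; later items are never given
print(subtest_raw_score([1, 1, 0, 1, 0, 0, 0, 1, 1]))  # 3
```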

Types of scores provided include raw scores, age equivalents, percentiles, and standard scores. In
particular, standard scores are normalized and are based on a mean of 100 and a standard deviation of
15. Children with standard composite scores below 90 are considered at risk. Additionally, a table in the
manual allows the user to convert the scores to normal curve equivalents (NCEs), z-scores, T-scores, and
stanines.
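The conversions in that table follow directly from the normalized standard-score scale; a sketch under the usual scale definitions (T-scores with M=50/SD=10, NCEs with M=50/SD=21.06, stanines with M=5/SD=2, clipped to 1-9), not a reproduction of the manual's table:

```python
def convert_standard_score(score, mean=100.0, sd=15.0):
    """Re-express a normalized standard score (M=100, SD=15) as the
    other score types the manual's conversion table provides."""
    z = (score - mean) / sd
    return {
        "z": z,
        "T": 50 + 10 * z,                        # T-score
        "NCE": 50 + 21.06 * z,                   # normal curve equivalent
        "stanine": max(1, min(9, round(2 * z + 5))),
    }

print(convert_standard_score(85))  # a composite of 85 sits one SD below the mean
```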

DEVELOPMENT. The YCAT was developed to help identify young children at risk for academic failure. The
test manual authors list eight reasons why such testing should take place. Central to their argument is
the notion that it is best to identify academic problems early on in order to provide interventions that
have the greatest likelihood of success. The authors acknowledge that early achievement is based on
the interaction of several factors, such as physical/psychological well-being, the child's environmental
experiences, informal and formal instruction, and finally, the child's intrinsic curiosity and motivation.

In developing this instrument, a large number of early childhood tests and curriculum materials were
reviewed, which are listed in the manual. Based on this review, 183 test items were initially generated,
of which 117 were eventually chosen for the YCAT.

Test developers conducted traditional item analysis by examining difficulty and item discrimination (using
an item/total-score Pearson correlation index). Median percentages of item difficulty and discrimination
are reported across age ranges and across the five subtests. The values provided appear to be
acceptable.

Two differential item functioning (DIF) techniques were used as a screen to detect item bias: the
logistic regression approach and the Delta Plot approach. Four groups were compared: male vs. female,
European Americans vs. non-European Americans, African Americans vs. non-African Americans, and
Hispanic Americans vs. non-Hispanic Americans. The logistic regression approach yielded a relatively
small number of potentially biased items per group. These items were examined, and the authors did not
feel that they were particularly biased. The Delta Plot approach suggested little if any bias in the items
for the four comparison groups.

TECHNICAL.

Standardization. Normative data are based on 1,224 children sampled from 32 states (1996-1999). This
sample was designed to be representative of the nation. The manual provides a table listing the
demographic characteristics of the sample next to census figures for these areas: geographic region,
gender, race, residence, ethnicity, family income, parents' educational attainment, and disability status.
The sample percentages, and those of the census, seem quite comparable.

Reliability. Internal consistency was estimated using Cronbach's (1951) coefficient alpha. Values across
the subtests, and at different ages, ranged from .74 (General Information, age 7) to .92 (Reading, age 4).
The majority of the subtest values were in the mid- to high .80s. The Early Achievement Composite score
yielded high internal consistency values: .95 to .97. The authors went on to calculate alphas for nine
subgroups (e.g., males, females, African Americans, those classified as learning disabled, those classified
as mentally retarded). Nearly all of these values were in the .90s for the various subtests, and ranged
from .97 to .99 for the composite score, which are very high values. The authors argue that this suggests
that the YCAT is "about equally reliable" (examiner's manual, p. 57) for the different subgroups examined.
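Coefficient alpha itself is straightforward to compute from an examinee-by-item score matrix; a self-contained sketch of the standard formula, with a tiny hypothetical data set (not YCAT data):

```python
def cronbach_alpha(scores):
    """Cronbach's (1951) coefficient alpha for `scores`, a list of rows
    (one per examinee) of item scores:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(scores[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = sum(var([row[j] for row in scores]) for j in range(k))
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)

print(round(cronbach_alpha([[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]), 2))  # 0.75
```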

Test-retest reliability (approximately 2-week interval) was calculated using a sample of 190 children from
two different schools. The five subtests had high test-retest reliability, ranging from .97 to .99. The
correlations were corrected for restriction of the range where appropriate. No value was reported for
the composite score.
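Corrections for restriction of range are conventionally done with Thorndike's Case 2 formula; the manual does not state its exact method, so the following is an assumption-laden sketch of the standard approach:

```python
import math

def correct_range_restriction(r, sd_restricted, sd_unrestricted):
    """Thorndike Case 2 correction: adjust a correlation observed in a
    sample whose SD is narrower than the reference population's.
    k = ratio of unrestricted to restricted SD."""
    k = sd_unrestricted / sd_restricted
    return r * k / math.sqrt(1 - r**2 + (r * k) ** 2)

# e.g., r = .90 observed in a sample with SD 10, corrected to a norm SD of 15
print(round(correct_range_restriction(0.90, 10, 15), 3))
```

Note how the correction pushes an already-high coefficient even higher, which may bear on the reviewers' comments below about the seemingly inflated test-retest values.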

Interscorer reliability was estimated as follows. Two graduate students independently scored 100
completed protocols. The correlations between their scores on the five subtests ranged from .97 to .99,
indicating a high degree of agreement. Again, no value was reported for the composite score.

VALIDITY.

Content validity. As described in the Development section, to build content validity into their test, the
authors reviewed a number of early childhood tests and curriculum materials in order to produce test
items.

Criterion-related validity. Concurrent validity was measured by correlating performance on the YCAT
with a variety of other tests, including the Comprehensive Scales of Student Abilities (1994), the Kaufman
Survey of Early Academic and Language Skills (1993), the Metropolitan Readiness Tests (1995), and the
Gates-MacGinitie Reading Tests (1989). The numerous resultant correlations are listed in the manual, and
for the most part, provide evidence for concurrent validity.

Construct validity. The manual discusses six premises related to construct validation: age differentiation,
group differentiation, the YCAT's relationship to academic ability, the YCAT's relationship to intelligence,
subtest interrelationships, and item validity. In the first instance, as expected, evidence is provided that
the YCAT does indeed differentiate between students on the basis of age. Older students make higher
scores than younger students on the various subtests. Second, group means for Whites, Blacks,
Hispanics, and ADHD individuals were virtually identical, whereas means for those with learning
disabilities and mental retardation were lower, as one would expect if the test were working correctly.
Third, as described earlier under concurrent validity, the YCAT correlates with other achievement
tests. Likewise, the YCAT correlates to some extent with the Slosson Intelligence Test for Children and
Adults (1990). Correlations with the Slosson for the five YCAT subtests ranged from .44 to .73, and the
correlation was .68 for the YCAT composite score. Fifth, the five subtests of the YCAT were
intercorrelated, with values ranging from .57 to .71. This suggests that they tap into the same construct:
academic abilities. Finally, the items of a subtest should correlate with the total score on that subtest.
Item discrimination values were discussed in the test development section. The presence of good item
discrimination is another piece of evidence supporting construct validity.

COMMENTARY. The test manual for the YCAT describes it as a "quick, reliable, and valid instrument to
help determine early academic abilities" (examiner's manual, p. 3), and I am inclined to agree.
Administration is relatively simple, and is facilitated by way of a colorful easel display format, as well as
an easy-to-follow record booklet. Scoring instructions are clear (as evidenced by high interscorer
reliability), and a variety of scores are reported, including percentiles and standard scores. Even though
age-equivalent scores are provided, the authors of the manual rightly caution against their use.

Reliability appears to be a strength. Based on coefficient alpha reliabilities, the standard error of
measurement (SEM) was only +/-3 points across the board on the composite score. On the subtests, the
SEM ranged from +/-4 to 8 points, with the majority of the values being +/-5 or 6 points. I was especially
impressed with the test-retest reliabilities (2-week interval), which ranged from .97 to .99. It is important
to note that these were corrected for restriction of the range where appropriate. It would have been
interesting to have seen the values prior to the corrections. Also, I was curious as to why no test-retest
reliability was calculated for the composite score.
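The SEM values cited follow from the classical formula SEM = SD * sqrt(1 - reliability); a quick check shows the +/-3-point composite SEM is reproduced by an alpha of about .96:

```python
import math

def sem(sd, reliability):
    """Classical standard error of measurement: SD * sqrt(1 - r)."""
    return sd * math.sqrt(1 - reliability)

print(round(sem(15, 0.96), 1))  # 3.0 -> the composite SEM noted above
print(round(sem(15, 0.89), 1))  # 5.0 -> a high-.80s subtest alpha gives ~5 points
```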

Likewise, validity evidence was provided for content, concurrent, and construct validity. The evidence
for all three types was convincing. I was particularly taken by the efforts made to reduce bias, which
seem to have been successful. For example, the mean scores on the five subtests, and the composite,
are nearly identical for Anglo-European Americans, African Americans, and Hispanic Americans.
Eliminating potential bias was clearly a priority during the development of this test.

Because the YCAT is designed to predict school failure, it would be worthwhile to have an estimate of
the instrument's predictive validity by testing a sample of young children, and then correlating their
scores with their actual school performance after 1 or 2 years. Such information was probably
unavailable when the manual was published. However, it could be included when the manual is next
revised.

Finally, the test manual seems to be particularly well-written. Beyond the usual chapters dealing with
administration, scoring, norms, reliability, and validity, the manual includes informative sections titled
"Information to Consider Before Testing," "Interpreting the YCAT Results," "Controlling for Test Bias," and
"Other Important Factors Relating to Testing Children's Early Development." Throughout, the manual
often refers to the Standards for Educational and Psychological Testing (AERA, APA, & NCME, 1985), as
well as a variety of measurement references, providing useful information and cautionary notes where
appropriate. Also very important, the manual stresses that the assessment of young children should
involve information from a variety of sources, not just the scores from a single test. Further, the manual
advises care in interpreting scores when the test has been administered to children who speak
nonstandard English, or who are bilingual.

SUMMARY. The Young Children's Achievement Test (YCAT) is an individually administered test of
academic abilities for English-speaking children ages 4-0 to 7-11 years. Particular attention was paid to
the Standards (AERA, APA, & NCME, 1985) in its development, and the result appears to be a technically
sound test that is easy to administer and score. It should be a useful instrument for the purposes of
identifying young children at risk for school failure, and for measuring academic progress.

REVIEWER'S REFERENCE

American Educational Research Association, American Psychological Association, & National Council on
Measurement in Education. (1985). Standards for educational and psychological testing. Washington, DC:
American Psychological Association, Inc.

Review of the Young Children's Achievement Test by SUSAN J. MALLER, Associate Professor of
Educational Studies, Purdue University, West Lafayette, IN:

DESCRIPTION. The Young Children's Achievement Test (YCAT) is an individually administered test that
measures academic skills necessary for children in preschool through first grade, including General
Information, Reading, Mathematics, Writing, and Spoken Language. The manual states that the
YCAT can be used (a) to identify whether a child's academic skills are developing normally, (b) to
document progress, (c) in conjunction with other measures, and (d) in research.

The test materials include the examiner's manual, picture book (in the form of an easel), 25 student
response forms, and 25 profile/examiner record booklets. Other materials needed include pencils,
erasers, 1 nickel, 2 dimes, and 6 pennies. The artwork for the items in the picture book is clear and
attractive. It should be noted that the technical manual is written very clearly. The authors did a good
job of explaining the technical aspects of the test.

The YCAT can be used for children ages 4-0 through 7-11. The manual states that the test is appropriate
for children who can understand the directions and who can "read, write, and speak English" (p. 7).
However, it appears that the test also is appropriate for children at the younger ages who have
prereading skills.

The YCAT should be administered by a person who has had formal training in administering and
interpreting results from assessments. The manual cites the guidelines suggested by Anastasi and Urbina
(1997) and also recommends that the examiner have had supervised training in using the screening tests
and have practiced administering the YCAT several times with colleagues.

YCAT administration requires approximately 25-45 minutes, although the test is not timed. Breaks are
allowed for children who cannot sit through the entire testing period, especially younger children.

All items are scored as correct or incorrect. The subtests have no basals and have a ceiling of three
consecutive incorrect items. The examiner's manual contains clearly presented scripts for each item on
each subtest. It also includes prompts and scoring criteria for each item. Item scores are summed to
obtain subtest raw scores. Tables are provided for converting the raw scores to age equivalents
(although the authors discourage the use of age equivalents), percentile ranks, and age-based quotients
(standard scores; M = 100, SD = 15). The manual does not explain why 3-month intervals are used for all
quotients except for the General Information quotients, which are based on 6-month intervals. In
addition, the summed subtest standard scores also are converted to an Early Achievement Composite
(EAC). A table is provided for converting quotients to a variety of other types of standard scores,
including Normal Curve Equivalents, z-scores, T-scores, and stanines. Scoring information is recorded on the
YCAT Profile/Examiner Record Booklet. The record booklet also includes sections for recording
identifying information (name, gender, school, grade, age, etc.) and for drawing the profile of scores. A
table is provided for interpreting EAC quotients (standard scores), which includes labels from Very Poor
to Very Superior with percentages of examinees falling within each label range. The manual also includes
an entire section that points out the flaws of, and discourages the use of, age-equivalent scores; yet, in
somewhat of a contradiction, the record booklet includes these scores as an option for reporting test results. Tables
also are provided for determining the statistical significance of discrepancies between YCAT subtests.
Standard errors of measurement are presented along with reliability coefficients in a later section in the
manual.
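Discrepancy-significance tables like those described are conventionally built from the standard error of the difference between two observed scores; a sketch of that standard approach (not necessarily the manual's exact method):

```python
import math

def min_significant_difference(sem_a, sem_b, z=1.96):
    """Smallest subtest discrepancy significant at the given z level,
    using SE_diff = sqrt(SEM_a^2 + SEM_b^2)."""
    return z * math.sqrt(sem_a**2 + sem_b**2)

# two subtests, each with a 5-point SEM, need a gap of about 14 points at p < .05
print(round(min_significant_difference(5, 5), 1))  # 13.9
```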

DEVELOPMENT. The manual summarizes the literature supporting the rationale concerning the
assumptions and theory that underlie the YCAT. The five subtests include the following:

General Information. This subtest measures a child's general fund of information, based on an
understanding of concepts and common knowledge (e.g., body parts, colors, categorization, personal
data).

Reading. This subtest measures the understanding of symbols and print conventions (e.g., letter
identification and sounds, reading words, and reading comprehension).

Mathematics. This subtest tests math concepts, including number identification, counting, math
calculation, and math problem solving.

Writing. This subtest measures the child's ability to copy and to write letters and simple words and
sentences.

Spoken Language. This subtest measures language skills, including expressive and receptive vocabulary,
communication, phonological awareness, etc.

Items were developed after consulting the research and numerous existing tests of related constructs,
including tests and screening instruments to measure General Information (intelligence tests, basic
concepts tests, and developmental scales), Reading, Mathematics, Writing, and Spoken Language.
The rationale for selecting specific instruments was not stated. After finding certain "item types" that were
commonly found across instruments, the authors reviewed relevant published curricula, resulting in the
development of 183 items, with 117 in the final version. No information was provided concerning the use
of a pilot test or tryout version. The criteria for dropping items from the final version are not stated until
a later section describing the criteria for inclusion based on the results of the conventional item
analysis, which included the use of item difficulty (proportions passing) and item discrimination (item-
total correlation) indices. More justification for using these methods should have been provided; these
indices are flawed because they (a) are highly dependent on the distributions of the given sample and (b)
lack information concerning examinees at various levels of ability. Furthermore, the results are
summarized in a table in the form of "median percentages of difficulty and discrimination" (manual, p.
68). This title is unclear because a percentage of difficulty or discrimination is not a coefficient typically
used. Furthermore, it would have been more informative to report indices for each item, rather than
summary statistics such as medians, because there is no way to determine (a) whether items are
administered in order of difficulty, which is very important, given the use of ceiling rules; and (b) the
distribution of the indices.
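The two conventional indices at issue can be stated concretely; a sketch with hypothetical data, where difficulty is the proportion passing and discrimination is the item-total Pearson correlation:

```python
def item_difficulty(item_scores):
    """Proportion of examinees passing the item (0/1 scores)."""
    return sum(item_scores) / len(item_scores)

def item_discrimination(item_scores, total_scores):
    """Pearson correlation between item score and total test score:
    the conventional item-total discrimination index."""
    n = len(item_scores)
    mx = sum(item_scores) / n
    my = sum(total_scores) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(item_scores, total_scores))
    vx = sum((x - mx) ** 2 for x in item_scores)
    vy = sum((y - my) ** 2 for y in total_scores)
    return cov / (vx * vy) ** 0.5

items = [1, 1, 0, 1, 0, 0]        # hypothetical 0/1 item responses
totals = [18, 20, 9, 15, 11, 7]   # hypothetical total test scores
print(round(item_difficulty(items), 2), round(item_discrimination(items, totals), 2))
```

As the review notes, both indices depend on the ability distribution of the particular sample, which is the heart of the criticism above.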

TECHNICAL. The standardization sample included 1,224 children from 32 states. The manual provides
detailed information regarding the standardization process. The YCAT was administered by a variety of
professionals, such as certified teachers, psychologists, speech therapists, and professors. Tests also
were administered by trained paraprofessionals as well as graduate students in regular and special
education. No information was provided concerning the level of training of these students or whether
they were supervised. The manual provides data concerning the demographic characteristics and its
representativeness of the 1997 Bureau of the Census in terms of geographical area, gender, race, urban
or rural residence, ethnicity, family income, parental education level, and disability status. Raw scores
were converted to age-based normalized standard scores, and distributions were smoothed.

The manual consistently uses language claiming that the YCAT "is reliable" or "is valid," even though this
type of language is discouraged in current measurement practice. Furthermore, on page 61 the manual
itself quotes Linn and Gronlund (1995, p. 49): "No test is valid for all purposes. Assessment
results are never just valid; they have a different degree of validity for each particular interpretation to
be made."

A significant portion of the manual is spent explaining and justifying the methods used in the reliability
and validity studies. To their credit, the authors do a fairly good job of backing up their methodology
with citations from the measurement literature. Coefficient alphas for the five subtests are reported
separately for age groups (ages 4, 5, 6, and 7) and averaged to provide a summary statistic across age
groups. Average alphas ranged from .80 to .89 for the subtests and the alpha was .96 for the EAC.
Standard errors of measurement also are reported. In addition, coefficient alphas are reported for
ethnic, gender, and disability subgroups. Test-retest reliabilities (2-week administration interval) were
based on a total of 190 children from two schools. These coefficients ranged from .97 to .99. No
explanation is offered for these seemingly inflated coefficients. Interscorer reliability coefficients were
based on the scores of 100 protocols completed by two advanced graduate students. These coefficients
ranged from .97 to .99. Certainly, the results of two scorers are not convincing evidence of reliability
across random scorers. The various reliability evidence also is presented in a summary table. However, it
should be noted that this table appears to contain a major error. That is, the column that is supposed
to report the subtest average alphas across age groups (which also is presented in the table with the
alphas) actually reports the EAC alphas for the individual age groups.

Validity. A variety of validity evidence is reported. As already mentioned, attempts to ensure content
validity were made by thoroughly reviewing the literature and related tests. Validity evidence included
(a) correlations of YCAT raw scores and age that ranged from .71 to .83, (b) comparison of the means of
contrast groups (although no tests of significance are reported), (c) correlations that ranged from low
to high between YCAT scores and various criterion measures of academic and school-related ability, (d)
correlations between the YCAT and the Slosson Intelligence Test for Children and Adults-Revised that
ranged from .44 to .73, and (e) subtest intercorrelations that ranged from .57 to .71. It should be noted
the rationale for selecting criterion measures was not presented, and thus, the usefulness of the YCAT
for predicting meaningful outcomes (e.g., future academic performance) still is somewhat questionable.
The sample sizes for the criterion-related validity studies ranged from 33 to 75 children. In addition, it
appears that factor analytic studies were not done, as no results are mentioned. Even so, a table in the
manual reports what items are claimed to measure. For example, some of the Reading items claim to
measure "reading comprehension," whereas others claim to measure "reading/listening comprehension,"
"word knowledge and reading comprehension," etc., without empirical evidence to support the validity
of these claims. Thus, practitioners are discouraged from interpreting the results of individual items.
Furthermore, without the evidence to support a one-factor YCAT model, it is difficult to determine
whether there is justification for reporting the EAC, even though the composite is reliable. Finally,
although significance levels are reported for interpreting discrepancies between subtests, evidence
concerning the predictive validity of these discrepancies is not provided.

Differential item functioning (DIF) was investigated using logistic regression and Delta Plot approaches.
Although the manual lists several different potential methods for investigating DIF, it does not present
the rationale for selecting the two methods used. It should be noted that the Delta Plot approach has
long been considered to be flawed (Bryk, 1980; Camilli & Shepard, 1994) because it is based on
proportions passing, which is highly influenced by the ability distributions of the groups under study.
For this reason, the logistic regression method is preferred, because it controls for ability. The manual
does not state whether a purification procedure was used to match examinees of equal ability. A
table reports the numbers of items found to exhibit DIF at the p < .01 level. Although the manual states
that the number of DIF items was well within the level of chance, at this significance level only about one
item per analysis would be expected to exhibit DIF by chance. For the groups studied, the number of DIF items ranged
from 2 for Hispanics versus all others to 11 for males versus females. The manual goes on to state that no
items were judged to contain "indications of overt bias" (p. 71), although it does not state how this
was concluded. Admittedly, DIF investigations are expensive and often are not reported for individually
administered tests. The test authors have taken notable initial steps toward investigating DIF; however,
more information is needed regarding the analysis, and justification is needed for the criterion used for
determining DIF. In terms of test bias investigations, no evidence is presented to demonstrate that the
YCAT predicts criterion measures similarly across groups. That is, no studies of differential prediction
were reported.
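The logistic regression DIF approach preferred above can be illustrated with a likelihood-ratio test comparing an ability-only model against a model that adds group membership. This is a minimal sketch of the general technique, not the analysis the test authors performed; the data are simulated and the p < .01 critical value (chi-square, df = 1) is hardcoded:

```python
import numpy as np

def fit_logistic(X, y, n_iter=50):
    """Fit logistic regression by Newton-Raphson; return the log-likelihood."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)                        # IRLS weights
        H = X.T @ (X * W[:, None]) + 1e-8 * np.eye(X.shape[1])
        beta += np.linalg.solve(H, X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return np.sum(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def dif_lr_statistic(ability, group, item):
    """Uniform-DIF test: ability-only model vs. ability + group model."""
    ones = np.ones(len(item))
    ll_reduced = fit_logistic(np.column_stack([ones, ability]), item)
    ll_full = fit_logistic(np.column_stack([ones, ability, group]), item)
    return 2 * (ll_full - ll_reduced)          # ~ chi-square, df = 1

# Simulated responses that depend only on ability, so no DIF is built in.
rng = np.random.default_rng(0)
n = 2000
ability = rng.normal(size=n)
group = rng.integers(0, 2, size=n)             # focal vs. reference group
item = (rng.random(n) < 1 / (1 + np.exp(-ability))).astype(float)

g2 = dif_lr_statistic(ability, group, item)
flagged = g2 > 6.63                            # chi-square(1) critical value, p < .01
```

Because group membership enters the model alongside total ability, the test controls for the ability-distribution differences that undermine the Delta Plot approach.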

COMMENTARY AND SUMMARY. The YCAT is an individually administered test of academic skills for children
from preschool through first grade. The YCAT manual is well written and easy to follow. The test
materials are attractive and clearly presented. Relevant literature is cited justifying the underlying
theoretical constructs of the YCAT. Reliability coefficients are impressive. The validity evidence that is
presented is fairly convincing; however, further evidence is needed (a) to determine whether the YCAT
predicts meaningful criterion measures, (b) regarding the internal structure of the YCAT, and (c)
regarding the lack of DIF and test bias in the YCAT. Overall, this first version of the YCAT appears to
be a promising measure of academic abilities in young children, especially once more of the
above-mentioned evidence is obtained regarding the validity of the instrument.

REVIEWER'S REFERENCES

Anastasi, A., & Urbina, S. J. (1997). Psychological testing (7th ed.). Upper Saddle River, NJ: Prentice Hall.

Bryk, A. (1980). [Review of Bias in mental testing]. Journal of Educational Measurement, 17, 369-374.

Camilli, G., & Shepard, L. A. (1994). Methods for identifying biased test items. Thousand Oaks, CA: Sage.

Linn, R. L., & Gronlund, N. E. (1995). Measurement and assessment in teaching (7th ed.). Englewood
Cliffs, NJ: Merrill.

Original MMY Test Citation

[15022608]

Young Children's Achievement Test.

Purpose: Designed to help determine early academic abilities.

Population: Ages 4-0 to 7-11.

Publication Date: 2000.

Acronym: YCAT.

Scores, 6: General Information, Reading, Mathematics, Writing, Spoken Language, and Early Achievement
Composite.

Administration: Individual.

Price Data, 2003: $184 per complete kit including examiner's manual (154 pages), picture book (48 pages),
25 student response forms, and 25 profile/examiner record booklets; $57 per examiner's manual; $66 per
picture book; $25 per 25 student response forms; $41 per 25 profile/examiner record booklets.

Time: (25-45) minutes.

Authors: Wayne P. Hresko, Pamela K. Peak, Shelley R. Herron, and Deanna L. Bridges.

Publisher: PRO-ED.

2 Reviews Available

Developmental [02]

The Fifteenth Mental Measurements Yearbook 2003

Examiner's manual, 2000, 154 pages.

Carney, Russell N. (Southwest Missouri State University); Maller, Susan (Purdue University).

Copyright
Copyright © 2011. The Board of Regents of the University of Nebraska and the Buros Center for Testing.
All rights reserved. Any unauthorized use is strictly prohibited. Buros Center for Testing, Buros Institute,
Mental Measurements Yearbook, and Tests in Print are all trademarks of the Board of Regents of the
University of Nebraska and may not be used without express written consent.

Update Code 20110800

Producer Information
Support and Technical

Buros Center for Testing
University of Nebraska–Lincoln
21 Teachers College Hall
Lincoln, NE 68588-0348
Phone: (402) 472-6203
Fax: (402) 472-6207
Email: [email protected]

Copyright

Copyright, Mental Measurements Yearbook, Buros, 1989 to present. All rights reserved.
