Accession Number
14172045
Classification Code
Speech-and-Hearing [17]
Database
Mental Measurements Yearbook
Mental Measurements Yearbook
The Fourteenth Mental Measurements Yearbook 2001
Title
Cooper Assessment for Stuttering Syndromes.
Acronym
CASS (Children's Version: CASS-C; Adolescent and Adult Version: CASS-A).
Author
Cooper, Eugene B.; Cooper, Crystal S.
Purpose
Identifies and quantifies affective, behavioral, and cognitive components of stuttering syndromes.
Publisher
The Psychological Corporation, 555 Academic Ct, San Antonio, TX 78204-2498
Publisher Name
The Psychological Corporation
Date of Publication
1995
Population
Children (ages 3-13); Adolescents and Adults (age 13 and over)
Scores
Chronicity Prediction Checklist, Parent Assessment of Fluency Digest, Teacher Assessment of Fluency Digest, Child Fluency Assessment Digest, Speech Situation Avoidance Reactions, Frequency of Disfluencies, Client's Assessment of the Fluency Problem, Clinician's Assessment of the Fluency Problem, Chronicity of the Fluency Problem.
Administration
Individual
Manual
Manual, 1995, 1 page.
Price
Price data not available.
Time
[60] minutes.
Reviewers
Hurford, David P. (Pittsburg State University); Norris, Janet (Louisiana State University).
Review Indicator
2 Reviews Available
Comments
Each CASS package includes both DOS and Windows(R) versions; system requirements: DOS: 1.5 MB hard disk space; Windows(R) 3.1 or higher: 1 MB hard disk space. As of February 2001, the publisher advises that the test is now out of print.
Full Text
Review of the Cooper Assessment for Stuttering Syndromes by DAVID P. HURFORD, Director of the Center for the Assessment and Remediation of Reading Difficulties and Professor of Psychology and Counseling, Pittsburg State University, Pittsburg, KS:
The Cooper Assessment for Stuttering Syndromes (CASS) is composed of two protocols: the Cooper Assessment for Stuttering Syndromes: Children's Version (CASS-C) and the Cooper Assessment for Stuttering Syndromes: Adolescent and Adult Version (CASS-A). With regard to assessing stuttering behavior and its related cognitive and affective components, it is prudent to examine children and adolescents/adults separately. Approximately 75% of children who stutter will not stutter after the onset of puberty. Those who do, and who have not been assisted by treatment, are quite likely to stutter for the rest of their lives. One facet of the CASS-C is to determine the likelihood of sustained disfluency in oral language. The CASS-A and the CASS-C are both computer programs designed to assist the speech-language pathologist (SLP) in organizing important information regarding the behavioral, cognitive, and affective components of stuttering. The CASS-A/C is a self-contained protocol on a 3.5-inch diskette. No manual or supporting material accompanies the diskettes.
The CASS-C and CASS-A are not assessment instruments; they are fundamentally organizational protocols. The CASS summarizes the data entered by the speech-language pathologist and derives a mean fluency disorder rating.
The CASS-C includes the Chronicity Prediction Checklist, the Parent Fluency Assessment Digest, the Teacher Fluency Assessment Digest, and the Child Fluency Assessment Digest. From these, a report is derived that usefully compiles the likelihood of continued stuttering difficulty; the parent's and teacher's judgments of the severity of the child's disfluency; and observational information concerning the child's stuttering behavior and his or her cognitive and affective responses to stuttering. Last, there is an evaluation of the child's disfluency rate.
The CASS-A includes the Situation Avoidance Reactions, the Feelings and Attitudes Regarding Fluency, the Disfluency Frequency, Type and Duration, the Disfluency Concomitant Behaviors, the Client Perceptions of Severity, the Clinician Perceptions of Severity, and the Assessing Chronicity protocols. These components help to determine the severity of the individual's stuttering behavior, its etiology and development, as well as determining the nature of the cognitive and affective reactions to stuttering and the situations in which it occurs.
The strengths of the CASS-A/C are that the protocols include theoretically relevant items regarding not only the stuttering behavior itself, but also the secondary issues of the cognitive and affective responses to stuttering. Therapies have traditionally emphasized remediating the disfluencies rather than considering the individual's emotional and cognitive responses to stuttering, which can be quite disabling as well. Including the individual's cognitive and emotional responses to stuttering helps to provide a more complete understanding of the effect of stuttering on the client. Many of these concerns have grown out of the failure of therapeutic interventions. In such cases, the client leaves therapy with the stuttering intact and no support for the ancillary difficulties associated with disfluent speech. Cooper indicates that these individuals have chronic perseverative stuttering syndrome. Therapies should address the secondary issues related to stuttering for these individuals and should help them to develop compensatory mechanisms or other techniques for adjustment. Conversely, there are treatment programs that boast 93% initial success and 75% sustained success, in which clients retain fluent speech for 1 to 4 years post-treatment (e.g., Hollins Communication Research Institute in Roanoke, Virginia).
The disadvantages of using the CASS-A/C for purposes other than organizing and summarizing are numerous. No psychometric properties were evaluated or reported for either version of the CASS. Not only will the results of the CASS be idiosyncratic to the professional doing the evaluation, but there is also no guidance for differentiating among a mild, moderate, severe, or very severe problem, a determination the SLP must make when administering the CASS. The mean fluency disorder score is based entirely on the values produced by the SLP, the client, and the parents/teachers. No studies of reliability or validity were conducted, and no norming sample existed to determine the nature of the derived scores.
The CASS-A/C is relatively simple to use; however, the entire test is computer based with no supporting material. In fact, there is no manual. As a result, many potential users will find this device seriously lacking. Some time and effort is required to explore and to become familiar with the various aspects of the program.
SUMMARY. The CASS-A/C should be used only as an organizational vehicle. The items are appropriate for fully comprehending the nature of the individual's stuttering difficulty. However, the SLP will need to determine the meaning of the summary information. There are no norms associated with either version of the CASS, and there is no evidence that individuals with normal fluency would respond any differently to many of the items on the CASS than individuals with disfluency. It should be emphasized that no studies of reliability or validity were reported; as a result, there is no evidence that scores from the CASS-A/C are reliable or valid. Using the CASS-A or CASS-C for any purpose other than organizing clinical interview information is not appropriate and is not advised.
Review of the Cooper Assessment for Stuttering Syndromes by JANET NORRIS, Professor of Communication Disorders, Louisiana State University, Baton Rouge, LA:
The Cooper Assessment for Stuttering Syndromes is a computerized program designed to assist a practitioner in conducting an evaluation of stuttering. By following the directions in the program, a fluency protocol can be completed, with the resulting data automatically tabulated and computed for means and percentage of occurrence. The data then are organized into a report complete with an analysis of the results, including judgments of severity and prognosis. The results can be printed as a report complete with a personal heading or letterhead, or stored for later reference.
Two nearly identical versions of the program are available, one for children and one for adolescents and adults. Both versions begin by entering identifying information about the individual to be assessed, including name, address, and date of birth. This is followed by six tasks designed to examine common phenomena associated with stuttering. These include assessments of speech situation avoidance reactions, feelings and attitudes regarding stuttering, the percentage of disfluencies produced during differing speech tasks, types and duration of disfluencies exhibited, extraneous and distracting behaviors produced while speaking, and a perception of the individual's disorder as judged separately by the individual and the examiner. Additional comments also can be added to the report.
The protocol requires approximately one hour to complete and requires no materials except a reading passage that can be printed from the program. Most of the protocol is completed by asking the individual to respond to questions, making choices along a 5-point scale with descriptors such as "never," "frequently," or "always." Responses are entered directly into the computer using a keyboard positioned so that the examiner can observe and talk to the individual without having to shift out of typing position.
The questions presented are those traditionally asked in a fluency evaluation and are fairly comprehensive. For example, the speech situation avoidance subtest is composed of 50 questions, ranging from the very general (i.e., How often do you avoid speaking situations?) to the specific (How often do you avoid eating in restaurants?). Most common avoidance situations are addressed, including giving directions, giving one's name on the phone, talking to inattentive people, and talking in the classroom or during job interviews. The children's version includes questions designed to address adult situations and vice versa, for which a "not appropriate" entry can be submitted. This is the longest task on the protocol, and it is likely that children will begin to show distractibility or disinterest before the final item is reached. However, there is no restriction preventing the interview from being conducted across more than one session, and this may be an option for children and some adults.
Feelings and attitudes are similarly addressed, but with 25 questions responded to with "agree," "undecided," or "disagree." These questions are designed to evaluate self-perception ("Fluency problems are my own fault") and attitudes ("Stutterers should not be called on in class"). The protocol calls for the individual's initial reaction, rather than a thoughtful analysis of the statement.
The third section of the protocol assesses actual speech production using recitation, spontaneous conversation, and reading. Recitation includes repeating the nursery rhyme "Mary Had a Little Lamb" and the Pledge of Allegiance, as well as repeating six sentences. As the individual speaks, the examiner clicks the mouse for each disfluency detected. The program assumes that the examiner is a professional with knowledge of stuttering, because disfluencies are not defined nor are examples given. The program calculates the percentage of disfluently produced words out of the total number of words. Reading includes oral reading of a 100-word passage and 12 sentences that the individual must complete (e.g., 5 + 5 = ______).
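For readers unfamiliar with the computation, the disfluency rate the program reports reduces to simple arithmetic: the mouse clicks count disfluent words, which are divided by the total words spoken. A minimal sketch in Python (names are illustrative only; the CASS's internal code is undocumented):

```python
def disfluency_percentage(disfluent_clicks: int, total_words: int) -> float:
    """Percent of words produced disfluently (clicks mark disfluent words)."""
    if total_words <= 0:
        raise ValueError("total_words must be positive")
    return 100.0 * disfluent_clicks / total_words

# Example: 7 disfluencies detected during the 100-word reading passage.
print(disfluency_percentage(7, 100))  # 7.0
```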
Finally, a short sample (2-3 minutes) of conversation is obtained and analyzed subjectively by the examiner for a judgment of the percent of occurrence of disfluencies, while each disfluency heard is recorded by clicking the mouse. The tasks are most appropriate for older children and adults, especially the reading passage that appears to be at an upper elementary level. It is surprising that a passage with a lower readability level was not selected for the children's version.
The fourth section is a checklist of extraneous or secondary behaviors that commonly accompany disfluent speech. These include facial movements such as wrinkling the forehead or losing eye contact, body movements such as arm twitches or head jerks, and abnormal breathing patterns such as shallow breaths or rapid inhalations. Any behaviors observed during the assessment or known to be characteristic of the individual are to be indicated.
The fifth section is an assessment of the types and duration of disfluencies. Type is defined by such characteristics as insertion of sounds, words, or phrases; episodes of silent or vocalized prolongation; or sound, word, or phrase substitutions while speaking. Vocal quality, such as unusual pitch, intensity, rate, or prosody, also is indicated. Duration of the average disfluent moment is measured along a scale ranging from fleeting to greater than 4 seconds, with specification of the longest duration produced during the evaluation.
The final section measures the individual's and the examiner's perceptions of the disfluency. The individual reports the perceived frequency of his or her own disfluency, how often stuttering is considered to be a problem, and how great a problem it is in daily life. The examiner answers similar questions and also judges how the individual's perception of the problem compares to the actual problem as measured by the protocol. The examination ends with a checklist of 10 yes/no questions regarding common developmental and behavioral characteristics of stuttering, including whether the problem began in early childhood, whether periods of normal fluency and control are experienced, and whether the problem has persisted 10 years or more.
The results of the assessment, including tables displaying the results of the different subsections complete with totals, means, and percentages, are available almost immediately after the final click of the mouse. The program also provides a syndrome indication, such as "remediable stutter syndrome," and a severity rating, such as "mild" or "severe."
The Cooper protocol is based on traditional methods of assessing the observable products of stuttering, treated as a series of separate "stutter events," as well as affective, behavioral, environmental, and cognitive components of the syndrome. No manual is provided that describes the theoretical frame for the components of the protocol or explains how items were selected. However, Cooper and Cooper's (1985) integrated view of stuttering as a complex disorder influenced by multiple factors is well established in the literature, and the questions and measures are based on decades of research exploring the areas assessed in this protocol. It would be useful to have reliability data, such as test-retest performance for a variety of individuals exhibiting a range of stuttering severity, to assess the accuracy of this computerized method for gathering data. For example, the mouse clicks produced as the person speaks could be distracting or could add tension, resulting in greater disfluency, especially during the first administration when the procedure is unfamiliar. Validity data are also lacking. A determination of whether the severity rating and syndrome indication resulting from the protocol are consistent with actual severity and prognosis would be an important measure of construct validity.
SUMMARY. The Cooper Assessment for Stuttering Syndromes provides an assessment of a variety of factors related to stuttering, but each is treated as a separate component. The structure of this instrument thus limits insight into the true interaction among the factors that cause and maintain stuttering. The Cooper can be used to provide a descriptive assessment of some of the products of stuttering, and it does this very efficiently. But this is only a beginning point for assessing and treating stuttering. To conduct an adequate assessment of the syndrome, the examiner must look beyond the surface to view the complex and dynamic interactions that occur among these and many other factors. This would include an analysis of the factors that limit or increase disruptions in fluency during conversations that differ on factors such as topic of discussion and speaking partners, and consideration of the dynamics among multiple factors simultaneously impacting speech (Smith & Kelly, 1997).
REVIEWER'S REFERENCES
Cooper, E. B., & Cooper, C. S. (1985). Cooper Personalized Fluency Control Therapy--Revised. Allen, TX: DLM Teaching Resources.
Smith, A., & Kelly, E. (1997). Stuttering: A dynamic, multifactorial model. In R. F. Curlee & G. M. Siegel (Eds.), Nature and treatment of stuttering: New directions (2nd ed.). Boston: Allyn & Bacon.
Copyright
Copyright © 2011. The Board of Regents of the University of Nebraska and the Buros Center for Testing. All rights reserved. Any unauthorized use is strictly prohibited. Buros Center for Testing, Buros Institute, Mental Measurements Yearbook, and Tests in Print are all trademarks of the Board of Regents of the University of Nebraska and may not be used without express written consent.
Update Code
20110800
|
Accession Number
TIP07002841
Review Accession Number
15022608
Classification Code
Developmental [02]
Database
Mental Measurements Yearbook and Tests In Print
Mental Measurements Yearbook
The Fifteenth Mental Measurements Yearbook 2003
Title
Young Children's Achievement Test
Acronym
YCAT
Author
Hresko, Wayne P.; Peak, Pamela K.; Herron, Shelley R.; Bridges, Deanna L.
Purpose
Designed to help determine early academic abilities.
Publisher
PRO-ED, 8700 Shoal Creek Blvd., Austin, TX 78757-6897; Telephone: 800-897-3202; FAX: 800-397-7633; E-mail: [email protected]; Web: http://www.proedinc.com
Publisher Name
PRO-ED
Publisher ID Number
PRO24262
Date of Publication
2000
Population
Ages 4-0 to 7-11
Scores
General Information, Reading, Mathematics, Writing, Spoken Language, Early Achievement Composite
Administration
Individual
Price
2011: $240 per complete kit including examiner's manual (154 pages), picture book (48 pages), 25 student response forms, and 25 profile/examiner record booklets; $75 per examiner's manual; $81 per picture book; $35 per 25 student response forms; $59 per 25 profile/examiner record booklets.
Cross References
For reviews by Russell N. Carney and Susan J. Maller, see 15:285.
Time
(25-45) minutes.
Test Description
Young Children's Achievement Test. Purpose: Designed to help determine early academic abilities. Population: Ages 4-0 to 7-11. Publication Date: 2000. Acronym: YCAT. Scores, 6: General Information, Reading, Mathematics, Writing, Spoken Language, and Early Achievement Composite. Administration: Individual. Price Data, 2011: $240 per complete kit including examiner's manual (154 pages), picture book (48 pages), 25 student response forms, and 25 profile/examiner record booklets; $75 per examiner's manual; $81 per picture book; $35 per 25 student response forms; $59 per 25 profile/examiner record booklets. Time: (25-45) minutes. Authors: Wayne P. Hresko, Pamela K. Peak, Shelley R. Herron, and Deanna L. Bridges. Publisher: PRO-ED. Cross References: For reviews by Russell N. Carney and Susan J. Maller, see 15:285.
Full Text
Review of the Young Children's Achievement Test by RUSSELL N. CARNEY, Professor of Psychology, Southwest Missouri State University, Springfield, MO:
DESCRIPTION. The Young Children's Achievement Test (YCAT) is an individually administered test designed to measure early academic abilities. Its central purpose is to aid in the identification of young children at risk for school failure. YCAT materials include the examiner's manual, a picture book, profile/examiner record booklets, and student response forms. Designed for English-speaking preschoolers, kindergartners, and first-graders (ages 4-0 through 7-11), the five subtests of the YCAT measure General Information, Reading, Mathematics, Writing, and Spoken Language. The five resultant scores can be combined to yield an Early Achievement Composite score. The YCAT can also be used to document educational progress.
The YCAT is untimed, and the subtests can be administered in any order. Approximately 25 to 45 minutes are required for administration. An easel with color pictures (the picture book) allows for the presentation of many of the test items. The profile/examiner record booklet lists each question and also provides one or more examples of the correct response as an aid to scoring. Items are scored as either correct (1) or incorrect (0). The examiner begins with the first item on each subtest and continues until a ceiling is reached. Each subtest is about 20 items in length, except for the Spoken Language subtest, which is composed of 36 items.
Types of scores provided include raw scores, age equivalents, percentiles, and standard scores. In particular, standard scores are normalized and are based on a mean of 100 and a standard deviation of 15. Children with standard composite scores below 90 are considered at risk. Additionally, a table in the manual allows the user to convert the scores to normal curve equivalents (NCEs), z-scores, T-scores, and stanines.
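These conversions are the standard linear transformations of the normalized z-score. A minimal sketch, assuming the conventional definitions (T-score: M = 50, SD = 10; NCE: M = 50, SD = 21.06; stanine: M = 5, SD = 2, clipped to 1-9); the manual's own conversion table remains authoritative:

```python
import math

def conversions(quotient: float) -> dict:
    """Convert a standard score (M = 100, SD = 15) to common scales."""
    z = (quotient - 100) / 15
    return {
        "z": round(z, 2),
        "T": round(50 + 10 * z, 1),       # T-score
        "NCE": round(50 + 21.06 * z, 1),  # normal curve equivalent
        "stanine": max(1, min(9, math.floor(2 * z + 5.5))),
    }

print(conversions(85))  # one SD below the mean
# {'z': -1.0, 'T': 40.0, 'NCE': 28.9, 'stanine': 3}
```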
DEVELOPMENT. The YCAT was developed to help identify young children at risk for academic failure. The test manual authors list eight reasons why such testing should take place. Central to their argument is the notion that it is best to identify academic problems early on in order to provide interventions that have the greatest likelihood of success. The authors acknowledge that early achievement is based on the interaction of several factors, such as physical/psychological well-being, the child's environmental experiences, informal and formal instruction, and finally, the child's intrinsic curiosity and motivation.
In developing this instrument, the authors reviewed a large number of early childhood tests and curriculum materials, which are listed in the manual. Based on this review, 183 test items were initially generated, of which 117 were eventually chosen for the YCAT.
Test developers conducted traditional item analysis by examining difficulty and item discrimination (using an item/total-score Pearson correlation index). Median percentages of item difficulty and discrimination are reported across age ranges and across the five subtests. The values provided appear to be acceptable.
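Concretely, the conventional indices described here can be computed as follows; this is a minimal sketch assuming a 0/1-scored response matrix (rows = children, columns = items), not the developers' actual procedure:

```python
import numpy as np

def item_analysis(responses: np.ndarray):
    """Return per-item difficulty (proportion passing) and discrimination
    (Pearson correlation of each item with the total score)."""
    total = responses.sum(axis=1)
    difficulty = responses.mean(axis=0)
    discrimination = np.array([
        np.corrcoef(responses[:, j], total)[0, 1]
        for j in range(responses.shape[1])
    ])
    return difficulty, discrimination

rng = np.random.default_rng(0)
demo = (rng.random((50, 10)) > 0.4).astype(float)  # fake 0/1 responses
p, r = item_analysis(demo)
```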
Two differential item functioning (DIF) techniques were used as a screen to detect item bias: the logistic regression approach and the Delta Plot approach. Four groups were compared: male vs. female, European Americans vs. non-European Americans, African Americans vs. non-African Americans, and Hispanic Americans vs. non-Hispanic Americans. The logistic regression approach yielded a relatively small number of potentially biased items per group. These items were examined, and the authors did not feel that they were particularly biased. The Delta Plot approach suggested little if any bias in the items for the four comparison groups.
TECHNICAL.
Standardization. Normative data are based on 1,224 children sampled from 32 states (1996-1999). This sample was designed to be representative of the nation. The manual provides a table listing the demographic characteristics of the sample next to census figures for these areas: geographic region, gender, race, residence, ethnicity, family income, parents' educational attainment, and disability status. The sample percentages, and those of the census, seem quite comparable.
Reliability. Internal consistency was estimated using Cronbach's (1951) coefficient alpha. Values across the subtests, and at different ages, ranged from .74 (General Information, age 7) to .92 (Reading, age 4). The majority of the subtest values were in the mid- to high .80s. The Early Achievement Composite score yielded high internal consistency values: .95 to .97. The authors went on to calculate alphas for nine subgroups (e.g., males, females, African Americans, those classified as learning disabled, those classified as mentally retarded). Nearly all of these values were in the .90s for the various subtests, and they ranged from .97 to .99 for the composite score, which are very high values. The authors argue that this suggests the YCAT is "about equally reliable" (examiner's manual, p. 57) for the different subgroups examined.
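Coefficient alpha itself is a short computation. A minimal sketch of Cronbach's (1951) formula, assuming an examinees-by-items matrix of item scores (illustrative, not the authors' code):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var.sum() / total_var)

rng = np.random.default_rng(0)
demo = (rng.random((200, 20)) > 0.5).astype(float)  # independent items
print(round(cronbach_alpha(demo), 2))  # near 0 for random data
```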
Test-retest reliability (approximately 2-week interval) was calculated using a sample of 190 children from two different schools. The five subtests had high test-retest reliability, ranging from .97 to .99. The correlations were corrected for restriction of the range where appropriate. No value was reported for the composite score.
Interscorer reliability was estimated as follows. Two graduate students independently scored 100 completed protocols. The correlations between their scores on the five subtests ranged from .97 to .99, indicating a high degree of agreement. Again, no value was reported for the composite score.
VALIDITY.
Content validity. As described in the Development section, to build content validity into their test, the authors reviewed a number of early childhood tests and curriculum materials in order to produce test items.
Criterion-related validity. Concurrent validity was measured by correlating performance on the YCAT with a variety of other tests, including the Comprehensive Scales of Student Abilities (1994), the Kaufman Survey of Early Academic and Language Skills (1993), the Metropolitan Readiness Tests (1995), and the Gates-MacGinitie Reading Tests (1989). The numerous resultant correlations are listed in the manual, and for the most part, provide evidence for concurrent validity.
Construct validity. The manual discusses six premises related to construct validation: age differentiation, group differentiation, the YCAT's relationship to academic ability, the YCAT's relationship to intelligence, subtest interrelationships, and item validity. First, as expected, evidence is provided that the YCAT does indeed differentiate between students on the basis of age: older students earn higher scores than younger students on the various subtests. Second, group means for Whites, Blacks, Hispanics, and ADHD individuals were virtually identical, whereas means for those with learning disabilities and mental retardation were lower, as one would expect if the test were working correctly. Third, as described earlier under concurrent validity, the YCAT correlates with other achievement tests. Fourth, the YCAT correlates to some extent with the Slosson Intelligence Test for Children and Adults (1990): correlations with the Slosson for the five YCAT subtests ranged from .44 to .73, and the correlation was .68 for the YCAT composite score. Fifth, the five subtests of the YCAT were intercorrelated, with values ranging from .57 to .71, suggesting that they tap the same construct: academic ability. Finally, the items of a subtest should correlate with the total score on that subtest; item discrimination values were discussed in the test development section, and the presence of good item discrimination is another piece of evidence supporting construct validity.
COMMENTARY. The test manual for the YCAT describes it as a "quick, reliable, and valid instrument to help determine early academic abilities" (examiner's manual, p. 3), and I am inclined to agree. Administration is relatively simple, and is facilitated by way of a colorful easel display format, as well as an easy-to-follow record booklet. Scoring instructions are clear (as evidenced by high interscorer reliability), and a variety of scores are reported, including percentiles and standard scores. Even though age-equivalent scores are provided, the authors of the manual rightly caution against their use.
Reliability appears to be a strength. Based on coefficient alpha reliabilities, the standard error of measurement (SEM) was only +/-3 points across the board on the composite score. On the subtests, the SEM ranged from +/-4 to 8 points, with the majority of the values being +/-5 or 6 points. I was especially impressed with the test-retest reliabilities (2-week interval), which ranged from .97 to .99. It is important to note that these were corrected for restriction of the range where appropriate. It would have been interesting to have seen the values prior to the corrections. Also, I was curious as to why no test-retest reliability was calculated for the composite score.
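The +/-3-point figure follows directly from the classical relation SEM = SD * sqrt(1 - reliability). A quick check, using the composite's SD of 15 and an alpha of about .96 (the second line assumes a subtest alpha near .89, for illustration):

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Classical standard error of measurement."""
    return sd * math.sqrt(1 - reliability)

print(round(sem(15, 0.96), 1))  # 3.0 -> the composite's +/-3 points
print(round(sem(15, 0.89), 1))  # 5.0 -> typical of the subtests
```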
Likewise, validity evidence was provided for content, concurrent, and construct validity. The evidence for all three types was convincing. I was particularly taken by the efforts made to reduce bias, which seem to have been successful. For example, the mean scores on the five subtests, and the composite, are nearly identical for Anglo-European Americans, African Americans, and Hispanic Americans. Eliminating potential bias was clearly a priority during the development of this test.
Because the YCAT is designed to predict school failure, it would be worthwhile to have an estimate of the instrument's predictive validity by testing a sample of young children, and then correlating their scores with their actual school performance after 1 or 2 years. Such information was probably unavailable when the manual was published. However, it could be included when the manual is next revised.
Finally, the test manual seems to be particularly well-written. Beyond the usual chapters dealing with administration, scoring, norms, reliability, and validity, the manual includes informative sections titled "Information to Consider Before Testing," "Interpreting the YCAT Results," "Controlling for Test Bias," and "Other Important Factors Relating to Testing Children's Early Development." Throughout, the manual often refers to the Standards for Educational and Psychological Testing (AERA, APA, & NCME, 1985), and to a variety of measurement references, providing useful information and cautionary notes where appropriate. Also very important, the manual stresses that the assessment of young children should involve information from a variety of sources, not just the scores from a single test. Further, the manual advises care in interpreting scores when the test has been administered to children who speak nonstandard English or who are bilingual.
SUMMARY. The Young Children's Achievement Test (YCAT) is an individually administered test of academic abilities for English-speaking children ages 4-0 to 7-11 years. Particular attention was paid to the Standards (AERA, APA, & NCME, 1985) in its development, and the result appears to be a technically sound test that is easy to administer and score. It should be a useful instrument for the purposes of identifying young children at risk for school failure, and for measuring academic progress.
REVIEWER'S REFERENCE
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1985). Standards for educational and psychological testing. Washington, DC: American Psychological Association, Inc.
Review of the Young Children's Achievement Test by SUSAN J. MALLER, Associate Professor of Educational Studies, Purdue University, West Lafayette, IN:
DESCRIPTION. The Young Children's Achievement Test (YCAT) is an individually administered test that measures academic skills (General Information, Reading, Mathematics, Writing, and Spoken Language) necessary for children in preschool through first grade. The manual states that the YCAT can be used (a) to identify whether a child's academic skills are developing normally, (b) to document progress, (c) in conjunction with other measures, and (d) in research.
The test materials include the examiner's manual, picture book (in the form of an easel), 25 student response forms, and 25 profile/examiner record booklets. Other materials needed include pencils, erasers, 1 nickel, 2 dimes, and 6 pennies. The artwork for the items in the picture book is clear and attractive. It should be noted that the technical manual is written very clearly. The authors did a good job of explaining the technical aspects of the test.
The YCAT can be used for children ages 4-0 through 7-11. The manual states that the test is appropriate for children who can understand the directions and who can "read, write, and speak English" (p. 7). However, it appears that the test also is appropriate for children at the younger ages who have prereading skills.
The YCAT should be administered by a person who has had formal training in administering and interpreting results from assessments. The manual cites the guidelines suggested by Anastasi and Urbina (1997) and also recommends that the examiner have had supervised training in using the screening tests and have practiced administering the YCAT several times with colleagues.
YCAT administration requires approximately 25-45 minutes, although the test is not timed. Breaks are allowed for children who cannot sit through the entire testing period, especially younger children.
All items are scored as correct or incorrect. The subtests have no basals, and each has a ceiling of three consecutive incorrect items. The examiner's manual contains clearly presented scripts for each item on each subtest, along with prompts and scoring criteria for each item. Item scores are summed to obtain subtest raw scores. Tables are provided for converting the raw scores to age equivalents (although the authors discourage the use of age equivalents), percentile ranks, and age-based quotients (standard scores; M = 100, SD = 15). The manual does not explain why 3-month intervals are used for all quotients except the General Information quotients, which are based on 6-month intervals. In addition, the summed subtest standard scores are converted to an Early Achievement Composite (EAC). A table is provided for converting quotients to a variety of other types of standard scores, including Normal Curve Equivalents, z-scores, T-scores, and stanines. Scoring information is recorded on the YCAT Profile/Examiner Record Booklet, which also includes sections for recording identifying information (name, gender, school, grade, age, etc.) and for drawing the profile of scores. A table is provided for interpreting EAC quotients (standard scores), which includes labels from Very Poor to Very Superior with the percentages of examinees falling within each label's range. The manual also includes an entire section that points out the flaws of, and discourages the use of, age-equivalent scores; yet, in somewhat of a contradiction, the record booklet includes these scores as an option for reporting test results. Tables also are provided for determining the statistical significance of discrepancies between YCAT subtests (see the sketch below). Standard errors of measurement are presented along with reliability coefficients in a later section of the manual.
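Discrepancy-significance tables of this kind are conventionally built from the standard error of the difference between two observed scores. A minimal sketch of that arithmetic, with illustrative SEM values (the manual's own tables are authoritative):

```python
import math

def critical_difference(sem_a: float, sem_b: float, z: float = 1.96) -> float:
    """Smallest subtest difference significant at the chosen z level."""
    return z * math.sqrt(sem_a ** 2 + sem_b ** 2)

# Two subtests with SEMs of 5 and 6 standard-score points:
print(round(critical_difference(5, 6), 1))  # 15.3 points at p < .05
```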
DEVELOPMENT. The manual summarizes the literature supporting the assumptions and theory that underlie the YCAT. The five subtests are the following:
General Information. This subtest measures a child's general fund of information, based on an understanding of concepts and common knowledge (e.g., body parts, colors, categorization, personal data).
Reading. This subtest measures the understanding of symbols and print conventions (e.g., letter identification and sounds, reading words, and reading comprehension).
Mathematics. This subtest measures math concepts, including number identification, counting, math calculation, and math problem solving.
Writing. This subtest measures the child's ability to copy and to write letters and simple words and sentences.
Spoken Language. This subtest measures language skills, including expressive and receptive vocabulary, communication, phonological awareness, etc.
Items were developed after consulting the research and numerous existing tests of related constructs, including tests and screening instruments measuring General Information (intelligence tests, basic concepts tests, and developmental scales), Reading, Mathematics, Writing, and Spoken Language. The rationale for selecting specific instruments was not stated. After finding certain "item types" that were commonly found across instruments, the authors reviewed relevant published curricula, resulting in the development of 183 items, with 117 in the final version. No information was provided concerning the use of a pilot test or tryout version. The criteria for dropping items from the final version are not stated until a later section describing the criteria for inclusion based on the results of the conventional item analysis, which included the use of item difficulty (proportion passing) and item discrimination (item-total correlation) indices. More justification for using these methods should have been provided; these indices are flawed because they (a) are highly dependent on the distributions of the given sample and (b) lack information concerning examinees at various levels of ability. Furthermore, the results are summarized in a table in the form of "median percentages of difficulty and discrimination" (manual, p. 68). This title is unclear because a percentage of difficulty or discrimination is not a coefficient typically used. Furthermore, it would have been more informative to report indices for each item, rather than summary statistics such as medians, because there is no way to determine (a) whether items are administered in order of difficulty, which is very important given the use of ceiling rules, and (b) the distribution of the indices.
TECHNICAL. The standardization sample included 1,224 children from 32 states. The manual provides detailed information regarding the standardization process. The YCAT was administered by a variety of professionals, such as certified teachers, psychologists, speech therapists, and professors. Tests also were administered by trained paraprofessionals as well as graduate students in regular and special education; no information was provided concerning the level of training of these students or whether they were supervised. The manual provides data concerning the sample's demographic characteristics and its representativeness relative to 1997 Bureau of the Census figures in terms of geographic area, gender, race, urban or rural residence, ethnicity, family income, parental education level, and disability status. Raw scores were converted to age-based normalized standard scores, and distributions were smoothed.
The manual consistently uses language claiming that the YCAT "is reliable" or "is valid," even though this type of language is discouraged in current measurement practice. Indeed, the manual itself quotes the following from Linn and Gronlund (1995, p. 49) on page 61: "No test is valid for all purposes. Assessment results are never just valid; they have a different degree of validity for each particular interpretation to be made."
A significant portion of the manual is spent explaining and justifying the methods used in the reliability and validity studies. To their credit, the authors do a fairly good job of backing up their methodology with citations from the measurement literature. Coefficient alphas for the five subtests are reported separately for age groups (ages 4, 5, 6, and 7) and averaged to provide a summary statistic across age groups. Average alphas ranged from .80 to .89 for the subtests, and the alpha was .96 for the EAC. Standard errors of measurement also are reported. In addition, coefficient alphas are reported for ethnic, gender, and disability subgroups. Test-retest reliabilities (2-week administration interval) were based on a total of 190 children from two schools. These coefficients ranged from .97 to .99; no explanation is offered for these seemingly inflated coefficients. Interscorer reliability coefficients were based on the scores of 100 protocols completed by two advanced graduate students. These coefficients ranged from .97 to .99. Certainly, the results from two scorers are not convincing evidence of reliability across random scorers. The various reliability evidence also is presented in a summary table. However, this table appears to contain a major error: the column that is supposed to report the subtest average alphas across age groups (which also are presented in the table with the alphas) actually reports the EAC alphas for the individual age groups.
Validity. A variety of validity evidence is reported. As already mentioned, attempts to ensure content validity were made by thoroughly reviewing the literature and related tests. Validity evidence included (a) correlations of YCAT raw scores and age that ranged from .71 to .83, (b) comparisons of the means of contrast groups (although no tests of significance are reported), (c) correlations ranging from low to high between YCAT scores and various criterion measures of academic and school-related ability, (d) correlations between the YCAT and the Slosson Intelligence Test for Children and Adults-Revised that ranged from .44 to .73, and (e) subtest intercorrelations that ranged from .57 to .71. It should be noted that the rationale for selecting criterion measures was not presented; thus, the usefulness of the YCAT for predicting meaningful outcomes (e.g., future academic performance) still is somewhat questionable. The sample sizes for the criterion-related validity studies ranged from 33 to 75 children. In addition, it appears that factor analytic studies were not done, as no results are mentioned. Even so, a table in the manual reports what the items are claimed to measure. For example, some of the Reading items are claimed to measure "reading comprehension," whereas others are claimed to measure "reading/listening comprehension," "word knowledge and reading comprehension," etc., without empirical evidence to support the validity of these claims. Thus, practitioners are discouraged from interpreting the results of individual items. Furthermore, without evidence to support a one-factor YCAT model, it is difficult to determine whether there is justification for reporting the EAC, even though the composite is reliable. Finally, although significance levels are reported for interpreting discrepancies between subtests, evidence concerning the predictive validity of these discrepancies is not provided.
Differential item functioning (DIF) was investigated using logistic regression and Delta Plot approaches. Although the manual lists several different potential methods for investigating DIF, it does not present the rationale for selecting the two methods used. It should be noted that the Delta Plot approach has long been considered flawed (Bryk, 1980; Camilli & Shepard, 1994) because it is based on proportions passing, which are highly influenced by the ability distributions of the groups under study. For this reason, the logistic regression method is preferred, because it controls for ability. The manual does not state whether purification was used to match examinees of equal ability. A table reports the numbers of items found to exhibit DIF at the p < .01 level. Although the manual states that the number of DIF items was well within the level of chance, at this significance level only about one item per analysis should be flagged by chance. For the groups studied, the number of DIF items ranged from 2 for Hispanics versus all others to 11 for males versus females. The manual goes on to state that no items were judged to contain "indications of overt bias" (p. 71), although it does not state how this was concluded. Admittedly, DIF investigations are expensive and often are not reported for individually administered tests. The test authors have taken notable initial steps toward investigating DIF; however, more information is needed regarding the analysis, and justification is needed for the criterion used for determining DIF. In terms of test bias investigations, no evidence is presented to demonstrate that the YCAT predicts criterion measures similarly across groups; that is, no studies of differential prediction were reported.
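For readers unfamiliar with the logistic regression screen, each item response is regressed on the matching criterion (typically total score), group membership, and their interaction; a significant group term suggests uniform DIF, and a significant interaction suggests nonuniform DIF. A minimal sketch on simulated data (names and data are illustrative; this is not the YCAT analysis):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "total": rng.integers(0, 21, n).astype(float),  # matching criterion
    "group": rng.integers(0, 2, n),                 # 0/1 focal-group flag
})
# Responses depend only on ability, so no DIF is built into the data.
p_correct = 1 / (1 + np.exp(-(df["total"] - 10) / 3))
df["item"] = (rng.random(n) < p_correct).astype(int)

fit = smf.logit("item ~ total + group + total:group", data=df).fit(disp=0)
print(fit.pvalues[["group", "total:group"]])  # flag DIF at p < .01
```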
COMMENTARY AND SUMMARY. The YCAT is an individually administered test of academic skills necessary for preschoolers through first graders. The YCAT manual is well written and easy to follow, the test materials are attractive and clearly presented, and relevant literature is cited justifying the underlying theoretical constructs of the YCAT. Reliability coefficients are impressive. The validity evidence that is presented is fairly convincing; however, further evidence is needed (a) to determine whether the YCAT predicts meaningful criterion measures, (b) regarding the internal structure of the YCAT, and (c) regarding the lack of DIF and test bias in the YCAT. Overall, this first version of the YCAT appears to be a promising measure of academic abilities in young children, and it will be more convincing once more of the above-mentioned evidence regarding the validity of the instrument is obtained.
REVIEWER'S REFERENCES
Anastasi, A., & Urbina, S. J. (1997). Psychological testing (7th ed.). Upper Saddle River, NJ: Prentice Hall.
Bryk, A. (1980). [Review of Bias in mental testing]. Journal of Educational Measurement, 17, 369-374.
Camilli, G., & Shepard, L. A. (1994). Methods for identifying biased test items. Thousand Oaks, CA: Sage.
Linn, R. L., & Gronlund, N. E. (1995). Measurement and assessment in teaching (7th ed.). Englewood Cliffs, NJ: Merrill.
Original MMY Test Citation
[15022608]
Young Children's Achievement Test.
Purpose: Designed to help determine early academic abilities.
Population: Ages 4-0 to 7-11.
Publication Date: 2000.
Acronym: YCAT.
Scores, 6: General Information, Reading, Mathematics, Writing, Spoken Language, and Early Achievement Composite.
Administration: Individual.
Price Data, 2003: $184 per complete kit including examiner's manual (154 pages), picture book (48 pages), 25 student response forms, and 25 profile/examiner record booklets; $57 per examiner's manual; $66 per picture book; $25 per 25 student response forms; $41 per 25 profile/examiner record booklets.
Time: (25-45) minutes.
Authors: Wayne P. Hresko, Pamela K. Peak, Shelley R. Herron, and Deanna L. Bridges.
Publisher: PRO-ED.
Review Indicator
2 Reviews Available
Manual
Examiner's manual, 2000, 154 pages.
Reviewers
Carney, Russell N. (Southwest Missouri State University); Maller, Susan J. (Purdue University).
Copyright
Copyright © 2011. The Board of Regents of the University of Nebraska and the Buros Center for Testing. All rights reserved. Any unauthorized use is strictly prohibited. Buros Center for Testing, Buros Institute, Mental Measurements Yearbook, and Tests in Print are all trademarks of the Board of Regents of the University of Nebraska and may not be used without express written consent.
Update Code
20110800
|