
Differential Performance on a Direct Measure of Writing Skills for Black and White College Freshmen


ACT Research Report Series 89-8

Differential Performance on a Direct Measure of Writing Skills for Black and White College Freshmen

Catherine Welch, Allen Doolittle, Joyce McLarty

November 1989

For additional copies write: ACT Research Report Series, P.O. Box 168, Iowa City, Iowa 52243

©1989 by The American College Testing Program. All rights reserved.

ABSTRACT

The purpose of this study was to examine the differential performance of black and white college freshmen found on a direct measure of writing skills. The direct measure consisted of responses to two individual prompts, each requiring twenty minutes of testing time. Each essay was scored by two independent raters. Analysis of the data indicated that black examinees did not perform as well as white examinees on the essay test. The differences between the populations on the essay were similar in magnitude to differences found on a multiple-choice test of writing skills.

DIFFERENTIAL PERFORMANCE ON A DIRECT MEASURE OF WRITING SKILLS FOR BLACK AND WHITE COLLEGE FRESHMEN

The direct assessment of writing skills continues to be a central issue in education. Meredith and Williams (1984) reported that the direct assessment of writing is a major aspect of many state policies on testing. A recent issue of Educational Measurement: Issues and Practices was devoted to the assessment of writing. Recent studies of instruction have shown that schools are giving more attention to writing instruction for high school juniors and seniors (NAEP, 1986). This additional emphasis on direct writing assessment raises the question of what effect direct writing assessment has for various population subgroups. "NAEP results suggest that across the 10-year period from 1974 to 1984, trends in student achievement were much the same for many population subgroups. At ages 13 and 17, Black, Hispanic and White students showed relatively parallel trends in performance, with inconsistent trends or declines between 1974 and 1979 and gains from 1979 to 1984 (p. 6)."

Breland and Griswold (1981) concluded that black students at a given score level on a traditional college entrance test tended to write less well than the average white student at the same score level. White and female students tended to write better overall. Breland and Jones (1982) confirmed these earlier findings using the Cleary (1968) definition of bias, which stated that "a test is biased if the criterion score predicted from the common regression line is consistently too high or too low for members of the subgroup (p. 115)." They concluded: "multiple-choice scores predict essay scores similarly for all groups sampled, with the exception that lower-performing groups tend to be overestimated by multiple-choice scores. Analyses showed that women consistently wrote more superior essays than would be predicted by the multiple-choice measures and that men and minority groups wrote fewer superior essays than would be predicted (p. 21)."

White (1985) reports results which suggest that a direct measure of writing may be fairer than a multiple-choice test for ethnic minorities: "The TSWE, a conventional multiple-choice usage test, does not distribute the scores of minority students the way trained and accurate evaluators do. A carefully devised and properly scored essay test seems not to contain the same problem of validity for minorities (p. 81)."
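The Cleary (1968) criterion quoted above translates directly into a simple regression check. The sketch below is an illustration added here, not part of the original report; the function name, variable names, and use of numpy are assumptions. It fits a single regression line of essay scores on multiple-choice scores for all examinees pooled together, then computes the mean residual within each subgroup.

    # Illustrative sketch: Cleary-style regression check for predictive bias.
    import numpy as np

    def cleary_check(mc_scores, essay_scores, groups):
        """Mean prediction residual per subgroup under a common regression."""
        mc = np.asarray(mc_scores, dtype=float)
        essay = np.asarray(essay_scores, dtype=float)
        groups = np.asarray(groups)
        # Common (pooled) least-squares line: essay ~= b0 + b1 * mc.
        b1, b0 = np.polyfit(mc, essay, 1)
        residuals = essay - (b0 + b1 * mc)
        # A negative mean residual means the common line predicts criterion
        # (essay) scores that are consistently too high for that subgroup,
        # i.e., over-prediction -- bias in Cleary's sense.
        return {g: float(residuals[groups == g].mean()) for g in set(groups)}

In a check of this form, Breland and Jones's (1982) finding that lower-performing groups "tend to be overestimated by multiple-choice scores" would appear as a negative mean residual for those groups.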
This study will explore the following questions: 1) What are the differences between black and white examinee performance on a direct measure of writing skills? 2) Are these differences comparable in magnitude to objective-test relationships? 3) What characteristics of the test contribute the most (or least) to these differences?

Methodology

The Instrument

The American College Testing Program (ACT) has been working to develop a test which will directly measure upper-level (college) writing proficiency by means of student writing samples. Each form of the CAAP Writing (Essay) Test consists of two independent writing prompts, each administered within 20 minutes. The two prompts involve different issues and audiences, but each requires the examinee to formulate a clear thesis; support the thesis with an argument or reasons relevant to the issue, position taken, and audience; and present the argument in a well-organized, logical manner in writing appropriate to the conventions of standard English.

Each examinee received two scores per response to a prompt. A "purpose" score reflected how well the examinee responded to the task required by situations described in the writing prompt. A "language-usage" score reflected the raters' impressions of the relative presence of usage or mechanical errors and the degree to which such errors impeded the flow of thought in the essays. Each paper was scored on a 4-point scale by each of two raters.
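To make this scoring design concrete, the sketch below lays out one examinee's record as described in this section: two prompts, two raters per paper, and a purpose and a language-usage score on the 4-point scale from each rater. The section does not say how the eight ratings are combined into reported scores, so the simple averaging shown is a hypothetical choice for illustration, not ACT's documented procedure.

    # Hypothetical sketch of the CAAP essay scoring layout described above:
    # two prompts x two raters, each rater giving a "purpose" and a
    # "language-usage" score on a 1-4 scale. Averaging is an assumption.
    from statistics import mean

    # ratings[prompt][rater] = {"purpose": 1-4, "language_usage": 1-4}
    ratings = {
        "prompt_1": {"rater_A": {"purpose": 3, "language_usage": 2},
                     "rater_B": {"purpose": 3, "language_usage": 3}},
        "prompt_2": {"rater_A": {"purpose": 4, "language_usage": 3},
                     "rater_B": {"purpose": 3, "language_usage": 3}},
    }

    def score_summary(ratings):
        """Average each score dimension across raters and prompts."""
        purpose = [r["purpose"] for raters in ratings.values()
                   for r in raters.values()]
        usage = [r["language_usage"] for raters in ratings.values()
                 for r in raters.values()]
        return {"purpose": mean(purpose), "language_usage": mean(usage)}

    print(score_summary(ratings))  # {'purpose': 3.25, 'language_usage': 2.75}

Keeping the two dimensions separate, as this layout does, mirrors the report's distinction between responding to the rhetorical task (purpose) and control of mechanics (language usage).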