
Differential Validity in the ACT Tests


ACT RESEARCH REPORT No. 30
August, 1969

The American College Testing Program
P. O. Box 168, Iowa City, Iowa 52240

DIFFERENTIAL VALIDITY IN THE ACT TESTS

NANCY S. COLE

Summary

The differential validity of subject area tests of academic ability is investigated. Principal components analyses of test scores, high school grades, and college grades in English, math, social studies, and natural sciences show a dominant general ability dimension and a consistent configuration of subject areas on second and third dimensions. Data from approximately 250 colleges yield correlations of subject area college grades with subject area test scores on the American College Tests and with high school grades. A criterion of differential validity is proposed and calculated for the ACT tests and high school grades in predicting college grades. The moderate differential validity found is interpreted in terms of the first analysis.

Differential Validity in the ACT Tests

Nancy S. Cole*

Despite the successes of standardized tests of academic ability, one area has remained a problem: differential prediction. The ease with which tests have predicted overall academic success has led to the demand for more specific tests to differentiate ability in various academic areas. Because of the persuasive content validity of many of these subject area tests, verification of their differential validity has too often been ignored. For example, in his review of the College Entrance Examination Board (CEEB) admissions testing program, Fricke (1965) criticized the Scholastic Aptitude Test (SAT) and the CEEB achievement tests for their lack of differential validity and also noted the relatively little research evidence available.

The purpose of this paper is to investigate the differential validity of one commonly used college admissions test, the American College Test (ACT). Differential validity is of special concern because the relative scores on the four ACT tests in English, mathematics, social studies, and natural sciences are often used for evaluating a student's relative abilities in the four subject areas.

Two important aspects of predictor, $x$, and criterion, $y$, behavior are related to differential validity. The first is the degree of the correlations among the variables within the predictor and criterion sets (Guilford, 1956; Thorndike, 1950; Wesman and Bennett, 1951; etc.). When these correlations, $r_{x_i x_j}$ and $r_{y_i y_j}$, are high, the predictors and criteria have little independent variance. Thus when $x_i$ predicts $y$, $x_j$ also tends to predict it. Similarly, when $y_i$ is predicted by $x$, then $y_j$ also tends to be predicted by $x$. The relatedness of the ACT tests and high school grades, the predictors, and of college grades, the criteria, is considered in Study 1.

A second and more direct indicator of differential validity comes from the comparison of the correlations $r_{x y_j}$ with $r_{x y_k}$ for the set of predictors (Brogden, 1951; Cronbach, 1960; Horst, 1954; Mollenkopf, 1950; etc.). If $x$ correlates positively with $y_j$ but little or negatively with $y_k$, then $x$ is a suitable differential predictor for $y_j$ and $y_k$. In Study 2, these $r_{xy}$ correlations are collected for both ACT tests and high school grades as $x$ and for college grades as $y$.

Finally, in Study 3, a criterion of differential validity suitable to the differential use of test scores and high school grades is presented.
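The two indicators just described can be made concrete with a small numerical sketch. The Python snippet below is illustrative only: the scores are simulated around a single general-ability factor (an assumption made for the sake of the example, not data from the report), and it prints one within-predictor correlation together with a predictor's correlations with a matching and a non-matching criterion, the quantities $r_{x_i x_j}$, $r_{x y_j}$, and $r_{x y_k}$ discussed above.

```python
# A minimal sketch with simulated scores (not data from the report): a single
# general-ability factor g drives two predictors and two criteria, so the
# within-set and cross-set correlations all come out high and similar.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

g = rng.normal(size=n)                          # shared general-ability factor
x_eng = 0.8 * g + 0.6 * rng.normal(size=n)      # predictor x_i (e.g., English test)
x_math = 0.8 * g + 0.6 * rng.normal(size=n)     # predictor x_j (e.g., math test)
y_eng = 0.7 * g + 0.3 * x_eng + 0.6 * rng.normal(size=n)    # criterion y_j (English grade)
y_math = 0.7 * g + 0.3 * x_math + 0.6 * rng.normal(size=n)  # criterion y_k (math grade)

def r(a, b):
    """Pearson correlation of two score vectors."""
    return float(np.corrcoef(a, b)[0, 1])

print("r(x_i, x_j) =", round(r(x_eng, x_math), 2))  # correlation within the predictor set
print("r(x_i, y_j) =", round(r(x_eng, y_eng), 2))   # predictor vs. its matching criterion
print("r(x_i, y_k) =", round(r(x_eng, y_math), 2))  # predictor vs. the other criterion

# x_i differentially predicts y_j over y_k only to the extent that
# r(x_i, y_j) exceeds r(x_i, y_k); with a dominant general factor the two
# values are nearly equal, so differential validity is limited.
```

Because every simulated score is dominated by the same general factor, $r_{x y_j}$ and $r_{x y_k}$ come out nearly equal, which is exactly the situation that limits differential prediction.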
Using data presented in the first two studies, we then evaluated the differential validity of the ACT tests and of high school grades according to the proposed criterion.

*The author is indebted to James M. Richards, Jr. and Leo A. Munday for their helpful suggestions.

Study 1

As already noted, differential prediction is limited by similarities among the criteria to be predicted and among the predictors. Thus, to evaluate and understand the amount of differential validity in tests of academic ability and in high school grades for predicting college grades differentially, we must first understand the degree of relatedness of the predictor variables and of the criteria.

Data. The American College Testing Program provides research services to ACT-participating colleges. Included in the Standard Research Service analyses are correlations among the college grades in four subject areas (English, E; math, M; social studies, SS; and natural sciences, NS) which the colleges have reported. These correlations were collected for approximately 100 colleges participating in 1968, with a combined N of over 20,000 for each correlation (Table 1). The average of Fisher's z-transformations
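The averaging of college-level correlations through Fisher's z-transformation, mentioned at the end of the Data paragraph, can be sketched as follows. This is a minimal illustration rather than the report's own computation: the per-college correlations and sample sizes are invented, and the (n - 3) weighting is a common convention that the report may not have used.

```python
# A minimal sketch of averaging correlations across colleges with Fisher's
# z-transformation. The per-college correlations, sample sizes, and the
# (n - 3) weighting are invented for illustration, not values from the report.
import numpy as np

r_by_college = np.array([0.48, 0.55, 0.42, 0.60, 0.51])  # hypothetical r(E, M) per college
n_by_college = np.array([210, 185, 340, 150, 275])        # hypothetical sample sizes

# Fisher's z: z = arctanh(r) = 0.5 * ln((1 + r) / (1 - r))
z = np.arctanh(r_by_college)

# Average in the z metric (weighting each college by n - 3, the inverse of
# the approximate sampling variance of z), then map back to a correlation.
z_mean = np.average(z, weights=n_by_college - 3)
r_mean = float(np.tanh(z_mean))

print("average correlation via Fisher's z:", round(r_mean, 3))
```

Fisher's z is preferred for averaging because raw correlations are bounded and skewed near plus or minus one, while the transformed values are approximately normally distributed with a sampling variance that depends only on sample size, so they can be averaged and then mapped back with the inverse transform.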