
The Use of Unidimensional Item Parameter Estimates of Multidimensional Items in Adaptive Testing


ACT Research Report Series 87-13
Terry Ackerman
September 1987

For additional copies write: ACT Research Report Series, P.O. Box 168, Iowa City, Iowa 52243. © 1988 by The American College Testing Program. All rights reserved.

ABSTRACT

The purpose of this study was to investigate the effect of using multidimensional items in a computer adaptive test (CAT) setting that assumes a unidimensional IRT framework. Previous research has suggested that the composite of multidimensional abilities being estimated by a unidimensional IRT model is not constant throughout the entire unidimensional ability scale (Reckase, Carlson, Ackerman, & Spray, 1986). Results of this study suggest that univariate calibration of multidimensional data tends to "filter out" the multidimensionality: the more closely an item's multidimensional composite aligns with the orientation of the calibrated univariate ability scale, the larger its estimated discrimination parameter. If CAT item selection is based on the amount of information an item provides, items requiring similar (θ₁, θ₂) composites will most often be selected. These results further imply that in a CAT, examinees with different ability composites throughout the (θ₁, θ₂) plane could receive sets of items that discriminate between θ₁ and θ₂ to different degrees. In addition, examinees at different points along the mapped univariate scale could receive tests having different proportions of item content.

The Use of Unidimensional Item Parameter Estimates of Multidimensional Items in Adaptive Testing

Most item response theory models assume that an examinee's test performance can be explained by a single ability, or latent trait. That is, an examinee's position in the latent ability space can be determined by measuring a single ability dimension. However, one might suspect that this assumption is rarely met, because there are many cognitive factors that may account for an individual's response to an item (Traub, 1983). For a group of individuals, it is doubtful that a single cognitive skill, or a constant combination of skills, would be used by each person to respond to a single item. It is even more doubtful that the assumption of unidimensionality would be met for a group of individuals responding to an entire test.

Reckase, Carlson, Ackerman, and Spray (1986) have shown that, for generated two-dimensional data in which the difficulty and dimensionality of the items are confounded (e.g., easy items measure only ability 1 and difficult items measure only ability 2), the unidimensional ability estimation scale is related to different composites of the two abilities at different points on the unidimensional ability scale. Specifically, they reported that, for the particular confounding of ability and difficulty used, examinees in the upper LOGIST-estimated ability deciles differed mainly on θ₂, while those in the lower deciles differed mainly on θ₁.

If these results generalize to real achievement test items, they could have a profound effect on the application of computer adaptive testing (CAT). If an adaptive test item pool is composed of items that require different composites of ability to answer correctly, low-ability and high-ability examinees may be administered two sets of items that measure completely different combinations of skills.
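In a unidimensional CAT of the kind considered here, the next item administered is typically the one providing the most information at the examinee's current ability estimate. As a minimal sketch of that selection rule, the Python fragment below assumes the three-parameter logistic (3PL) model; the toy item pool, parameter values, and helper names are illustrative and are not taken from the report.

```python
# Maximum-information item selection for a unidimensional CAT (3PL model).
# The item pool and parameter values below are hypothetical.
import numpy as np

D = 1.7  # conventional logistic scaling constant


def p_3pl(theta, a, b, c):
    """Probability of a correct response under the 3PL model."""
    return c + (1.0 - c) / (1.0 + np.exp(-D * a * (theta - b)))


def info_3pl(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta:
    I(theta) = D^2 a^2 * (1 - P)/P * ((P - c)/(1 - c))^2."""
    p = p_3pl(theta, a, b, c)
    return (D * a) ** 2 * (1.0 - p) / p * ((p - c) / (1.0 - c)) ** 2


def select_next_item(theta_hat, pool, administered):
    """Index of the unadministered item with maximum information at theta_hat."""
    candidates = [(info_3pl(theta_hat, *item), idx)
                  for idx, item in enumerate(pool) if idx not in administered]
    return max(candidates)[1]


# Toy pool of (a, b, c) triples.
pool = [(1.4, -1.0, 0.2), (0.7, 0.0, 0.2), (1.8, 0.5, 0.2), (1.1, 1.5, 0.2)]
print(select_next_item(theta_hat=0.4, pool=pool, administered={0}))
```

Because item information grows with the square of the discrimination parameter, items whose estimated unidimensional discriminations are inflated by close alignment with the calibrated composite will dominate such a rule, which is the selection effect described in the abstract.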
The unidimensional item characteristics are thought to be a function of the alignment of the ability scale in the multidimensional ability space. This alignment, or orientation, is strongly influenced by the pattern of the multidimensional test information over the (θ₁, θ₂) plane. That is, if more information (e.g., more discrimination) is provided along the θ₁ axis, calibration using a univariate IRT model would orient the univariate ability scale along the θ₁ axis. If a data set provided uniform information throughout the ability plane, both dimensions should be represented equally well, and the univariate scale should be mapped at a 45° angle between the θ₁ and θ₂ axes. However, if the individual items differed in the composite of abilities needed for a correct response, tests composed of different items could have different orientations in the (θ₁, θ₂) plane. How the calibrated univariate scale is positioned in the plane may affect how different locations in the plane are mapped onto the scale.

Samejima (1978) has suggested that univaria
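The orientation argument above can be made concrete with a short sketch. Assuming a two-dimensional compensatory logistic item model (an illustrative choice; the model form and discrimination values below are not taken from the report's data), each item discriminates best along a direction in the (θ₁, θ₂) plane determined entirely by its discrimination vector (a₁, a₂), so a pool whose items point mostly along θ₁ supplies more information along θ₁, while equal discriminations correspond to the 45° composite described above.

```python
# Composite directions of hypothetical two-dimensional compensatory items.
# Model and parameter values are assumptions for illustration only.
import numpy as np

D = 1.7  # conventional logistic scaling constant


def p_m2pl(theta1, theta2, a1, a2, d, c=0.0):
    """Compensatory two-dimensional item response function:
    P = c + (1 - c) / (1 + exp(-D * (a1*theta1 + a2*theta2 + d)))."""
    z = D * (a1 * theta1 + a2 * theta2 + d)
    return c + (1.0 - c) / (1.0 + np.exp(-z))


def composite_angle(a1, a2):
    """Angle (degrees) between the theta1 axis and the direction in the
    (theta1, theta2) plane along which the item discriminates best."""
    return np.degrees(np.arctan2(a2, a1))


# An item drawing mostly on theta1, one weighting both equally, one mostly theta2.
for a1, a2 in [(1.5, 0.3), (1.0, 1.0), (0.3, 1.5)]:
    print(f"a = ({a1}, {a2})  composite angle = {composite_angle(a1, a2):5.1f} deg")

# Compensation: a high theta1 offsets a low theta2 for the equally weighted item.
print(f"P(0.5, -0.5) = {p_m2pl(0.5, -0.5, 1.0, 1.0, 0.0):.3f}")
```

Items whose composite angles cluster together measure essentially the same ability composite, so a univariate calibration of such a pool would align its scale with that shared direction, while items pointing elsewhere in the plane would receive smaller unidimensional discrimination estimates.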