Abstract
In computerized adaptive testing (CAT), item parameter estimates are assumed to be known and valid for all positions at which an item can be presented in the test. This assumption is problematic because item parameter estimates have been shown to depend on an item's position in the test. Neglecting item position effects in CAT administration would result in inefficient item selection and biased ability estimation. As a solution, a simple procedure for accounting for item position effects is proposed: potential item position effects are identified by fitting and comparing a series of item response theory models of increasing complexity in their item position effects. The proposed procedure is illustrated using empirical calibration data from three adaptive tests (N = 1,632). Test-specific item position effects were identified. By accounting for item position effects with an appropriate model, overestimation of variance and reliability was avoided. The implementation of item position effects in operational adaptive tests is explained.
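As a rough illustration of the kind of model comparison described above, the sketch below fits three nested logistic (Rasch-type) models, with no position effect, one global linear position effect, and item-specific linear position effects, and compares adjacent models with likelihood-ratio tests. The simulated data, column names, and fixed-effects logistic formulation are illustrative assumptions only; they do not reproduce the article's actual models or calibration data.

```python
# Minimal sketch, assuming simulated long-format response data; not the
# article's actual IRT models or empirical calibration data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(1)
n_persons, n_items = 150, 15

# Each person receives the items in a random order, so item position
# varies across persons (as in an adaptive administration).
rows = [(p, item, pos)
        for p in range(n_persons)
        for pos, item in enumerate(rng.permutation(n_items))]
data = pd.DataFrame(rows, columns=["person", "item", "position"])

# Simulate responses from a Rasch-type model with a small global linear
# position effect (items become slightly harder later in the test).
theta = rng.normal(size=n_persons)   # person abilities
b = rng.normal(size=n_items)         # item difficulties
eta = (theta[data["person"].to_numpy()]
       - b[data["item"].to_numpy()]
       - 0.03 * data["position"].to_numpy())
data["correct"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

# Drop persons with perfect or zero scores (inestimable under joint ML).
score = data.groupby("person")["correct"].transform("mean")
data = data[(score > 0) & (score < 1)].copy()

# Three nested models of increasing position-effect complexity:
#   m0: no position effect; m1: one global linear effect;
#   m2: item-specific linear effects.
fit = dict(method="lbfgs", maxiter=2000, disp=0)
m0 = smf.logit("correct ~ C(person) + C(item)", data).fit(**fit)
m1 = smf.logit("correct ~ C(person) + C(item) + position", data).fit(**fit)
m2 = smf.logit("correct ~ C(person) + C(item) + position:C(item)",
               data).fit(**fit)

# Likelihood-ratio tests between adjacent models.
for name, simple, complex_ in [("m0 vs m1", m0, m1), ("m1 vs m2", m1, m2)]:
    lr = 2 * (complex_.llf - simple.llf)
    df = int(complex_.df_model - simple.df_model)
    print(f"{name}: LR = {lr:.2f}, df = {df}, p = {chi2.sf(lr, df):.4f}")
```

In this sketch, a significant m0-vs-m1 test would flag a global position effect, while a significant m1-vs-m2 test would suggest the effect differs across items; information criteria could be used in place of likelihood-ratio tests.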
This article does not exactly replicate the final version published in the journal Diagnostica. It is not a copy of the original published article and is not suitable for citation.