Abstract
The Norwegian Agency for Quality Assurance in Education (NOKUT) assesses students' satisfaction with their learning outcomes using a self-report measure comprising ten generic items. The aggregate scores provide comparative information on these indicators to higher education institutions, applicants to higher education, students, the government, and other educational stakeholders. Drawing valid inferences from these scores, especially concerning the comparability of higher education study programs, requires an inspection of the measure's psychometric properties, including its validity. Given the current use of the scores, a unidimensional model is expected to fit the data. Confirmatory factor analysis with robust maximum likelihood estimation was therefore conducted to determine the extent to which data from the 2018 cycle of the student survey reflected the implied structure of the learning outcomes measure. The plausibility of the comparability claims was evaluated with measurement invariance tests across four selected study program types. The sample comprised respondents from nursing (n = 2194), business and administration (n = 2952), teacher education (n = 1032), and engineering (n = 1310) programs who answered all items on the learning outcomes measure. The implied unidimensional model was not supported; the data instead supported a modified single-factor model. Multigroup confirmatory factor analysis supported configural and metric invariance, indicating equivalence of the latent concept and its structure across the four study groups. Full scalar invariance was not achieved; however, after releasing the intercept equality constraints on three items, partial scalar invariance was reached. Accurate and valid measurement of learning outcomes is crucial for the stakeholders who depend on the scale to make important decisions.
These findings can inform a revision of the scale to ensure confident comparisons across groups.