Published online by Cambridge University Press: 01 January 2025
The recent surge of interest in cognitive assessment has led to the development of cognitive diagnosis models. Central to many such models is the specification of a Q-matrix, which relates items to latent attributes that have natural interpretations. In practice, the Q-matrix is usually constructed subjectively by the test designers, which can lead to misspecification and, in turn, lack of fit of the underlying statistical model. Traditional goodness-of-fit tests, such as the Chi-square test and the likelihood ratio test, cannot be applied straightforwardly to detect such misspecification because of the large number of possible response patterns. To address this problem, this paper proposes a new statistical method for testing the goodness of fit of the Q-matrix, constructing test statistics that measure the consistency between a provisional Q-matrix and the observed data for a general family of cognitive diagnosis models. Limiting distributions of the test statistics are derived under the null hypothesis and can be used to obtain the test p-values. Simulation studies as well as a real data example are presented to demonstrate the usefulness of the proposed method.
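To make the Q-matrix concrete, the following is a minimal illustrative sketch (not the paper's method). It assumes a hypothetical 4-item, 3-attribute Q-matrix and uses the DINA model's conjunctive ideal-response rule, under which a subject answers an item correctly (absent slipping and guessing) only if he or she masters every attribute the Q-matrix assigns to that item:

```python
import numpy as np

# Hypothetical Q-matrix for 4 items and 3 latent attributes:
# Q[j, k] = 1 means item j requires attribute k.
Q = np.array([
    [1, 0, 0],  # item 1 requires attribute 1 only
    [0, 1, 0],  # item 2 requires attribute 2 only
    [1, 1, 0],  # item 3 requires attributes 1 and 2
    [0, 1, 1],  # item 4 requires attributes 2 and 3
])

def ideal_response(alpha, Q):
    """DINA-style ideal responses: 1 iff the attribute profile alpha
    masters every attribute that item j requires (row-wise conjunction)."""
    return np.all(alpha >= Q, axis=1).astype(int)

# A subject mastering attributes 1 and 2 but not 3 answers items 1-3
# correctly and misses item 4 in the noise-free case.
alpha = np.array([1, 1, 0])
print(ideal_response(alpha, Q))  # -> [1 1 1 0]
```

A misspecified Q-matrix distorts these ideal-response patterns, which is the kind of inconsistency between the provisional Q-matrix and the observed data that the proposed test statistics are designed to detect.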
Electronic supplementary material The online version of this article (https://doi.org/10.1007/s11336-018-9629-6) contains supplementary material, which is available to authorized users.