Extreme response style (ERS), the tendency of participants to select extreme item categories regardless of item content, has frequently been found to decrease the validity of Likert-type questionnaire results. For this reason, various item response theory (IRT) models have been proposed to model ERS and correct for it. Comparisons of these models are, however, rare in the literature, especially in the context of cross-cultural comparisons, where ERS is even more relevant due to cultural differences between groups.
Recently, the Urnings algorithm (Bolsinova et al., 2022, J. R.
Adaptive learning and assessment systems support learners in acquiring knowledge and skills in a particular domain. Learners' progress is monitored as they solve items that match their level and target specific learning goals. Scaffolding and providing learners with hints are powerful tools for supporting the learning process.
Log-file data from computer-based assessments can provide useful collateral information for estimating student abilities, which can improve on traditional approaches that consider only response accuracy. Based on the amount of time students spent on 10 mathematics items from PISA 2012, this study evaluated the overall changes in, and measurement precision of, ability estimates and explored country-level heterogeneity when combining item responses and time-on-task measurements in a joint framework.
An extension to a rating system for tracking the evolution of parameters over time using continuous variables is introduced. The proposed rating system assumes a distribution for the continuous responses that is agnostic to the origin of the continuous scores, and can therefore be used for applications as varied as continuous scores obtained from language testing and scores derived from accuracy and response time in elementary arithmetic learning systems. Large-scale, high-stakes, online, anywhere-anytime learning and testing inherently comes with a number of unique problems that require new psychometric solutions.
Br J Math Stat Psychol, July 2021
With advances in computerized tests, it has become commonplace to register not just the accuracy of the responses provided to the items, but also the response time. The idea that for each response both response accuracy and response time are indicative of ability has explicitly been incorporated in the signed residual time (SRT) model (Maris & van der Maas, 2012, Psychometrika, 77, 615-633), which assumes that fast correct responses are indicative of a higher level of ability than slow correct responses. While the SRT model allows one to gain more information about ability than is possible based on considering only response accuracy, measurement may be confounded if persons show differences in their response speed that cannot be explained by ability, for example due to differences in response caution.
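The scoring rule behind the SRT model can be sketched as follows. This is a minimal illustration of the signed residual time idea (credit proportional to the time left on the clock, signed by correctness); the function and variable names are illustrative, not taken from the original paper.

```python
def srt_score(correct: bool, response_time: float, time_limit: float) -> float:
    """Signed residual time score: a fast correct response earns more
    credit than a slow correct one, and a fast error is penalized more
    heavily than a slow error."""
    residual = time_limit - response_time  # time left on the clock
    return residual if correct else -residual

fast_correct = srt_score(True, 5.0, 30.0)    # 25.0
slow_correct = srt_score(True, 25.0, 30.0)   # 5.0
fast_error = srt_score(False, 5.0, 30.0)     # -25.0
```

Under this rule, responding quickly only pays off when the answer is right, which is what ties response time to ability in the model.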
People's choices are often found to be inconsistent with the assumptions of rational choice theory. Over time, several probabilistic models have been proposed that account for such deviations from rationality. However, these models have become increasingly complex and are often limited to particular choice phenomena.
One of the highest ambitions in educational technology is the move towards personalized learning. To this end, computerized adaptive learning (CAL) systems are developed. A popular method for tracking the development of student ability and item difficulty in CAL systems is the Elo Rating System (ERS).
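The core Elo update, as commonly implemented in adaptive learning systems, can be sketched as follows; the step size `k` and the names are illustrative, not taken from any particular CAL system.

```python
import math

def elo_update(theta: float, beta: float, correct: bool, k: float = 0.4):
    """One Elo step: nudge the ability estimate (theta) and the item
    difficulty estimate (beta) by the prediction error on one response."""
    expected = 1.0 / (1.0 + math.exp(-(theta - beta)))  # P(correct)
    error = (1.0 if correct else 0.0) - expected
    return theta + k * error, beta - k * error

theta, beta = elo_update(0.0, 0.0, correct=True)
# a correct response raises the ability estimate and lowers the difficulty
```

The appeal of this scheme for CAL systems is that each update touches only the student and item involved, so ratings can be maintained online as responses stream in.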
Front Psychol, September 2019
Advances in technology hold great promise for expanding what assessments may achieve across domains. We focus on non-cognitive skills as our domain, but lessons can be extended to other domains for both the advantages and drawbacks of new technological approaches for different types of assessments. We first briefly review the limitations of traditional assessments of non-cognitive skills.
Psychometrika, December 2019
While standard joint models for response time and accuracy commonly assume the relationship between response time and accuracy to be fully explained by the latent variables of the model, this assumption of conditional independence is often violated in practice. If such violations are present, taking these residual dependencies between response time and accuracy into account may both improve the fit of the model to the data and improve our understanding of the response processes that led to the observed responses. In this paper, we propose a framework for the joint modeling of response time and accuracy data that allows for differences in the processes leading to correct and incorrect responses.
Various mixture modeling approaches have been proposed to identify within-subjects differences in the psychological processes underlying responses to psychometric tests. Although valuable, the existing mixture models are associated with at least one of the following three challenges: (1) A parametric distribution is assumed for the response times that-if violated-may bias the results; (2) the response processes are assumed to result in equal variances (homoscedasticity) in the response times, whereas some processes may produce more variability than others (heteroscedasticity); and (3) the different response processes are modeled as independent latent variables, whereas they may be related. Although each of these challenges has been addressed separately, in practice they may occur simultaneously.
The assumption of latent monotonicity is made by all common parametric and nonparametric polytomous item response theory models and is crucial for establishing an ordinal level of measurement of the item score. Three forms of latent monotonicity can be distinguished: monotonicity of the cumulative probabilities, of the continuation ratios, and of the adjacent-category ratios. Observable consequences of these different forms of latent monotonicity are derived, and Bayes factor methods for testing these consequences are proposed.
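Under one common set of definitions (assumed here for illustration; the paper's exact parameterization may differ), the three probability types can be computed from an item's category probabilities like this:

```python
def item_ratios(p):
    """Given category probabilities p[0..m] for a polytomous item score X,
    compute the three probability types whose monotonicity in the latent
    trait the three forms of latent monotonicity refer to."""
    m = len(p) - 1
    # cumulative probability: P(X >= x)
    cumulative = [sum(p[x:]) for x in range(1, m + 1)]
    # continuation ratio: P(X >= x | X >= x - 1)
    continuation = [sum(p[x:]) / sum(p[x - 1:]) for x in range(1, m + 1)]
    # adjacent-category ratio: P(X = x | X in {x - 1, x})
    adjacent = [p[x] / (p[x] + p[x - 1]) for x in range(1, m + 1)]
    return cumulative, continuation, adjacent

cum, cont, adj = item_ratios([0.1, 0.2, 0.3, 0.4])
```

Latent monotonicity of a given form then means that each of these quantities is non-decreasing in the latent variable; the three forms are not equivalent, which is why they have distinct observable consequences.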
Multivariate Behav Res, December 2019
Linear, nonlinear, and nonparametric moderated latent variable models have been developed to investigate possible interaction effects between a latent variable and an external continuous moderator on the observed indicators in the latent variable model. Most moderation models have focused on moderators that vary across persons but not across the indicators (e.g.
Front Psychol, September 2018
The most common process variable available for analysis due to tests presented in a computerized form is response time. Psychometric models have been developed for joint modeling of response accuracy and response time in which response time is an additional source of information about ability and about the underlying response processes. While traditional models assume conditional independence between response time and accuracy given ability and speed latent variables (van der Linden, 2007), recently multiple studies (De Boeck and Partchev, 2012; Meng et al.
In many applications of high- and low-stakes ability tests, a non-negligible number of respondents may fail to reach the end of the test within the specified time limit. Since some item responses will be missing for respondents who ran out of time, this raises the question of how best to deal with these missing responses in order to obtain an optimal assessment of ability. Commonly, researchers consider three general solutions: ignore the missing responses, treat them as incorrect, or treat them as missing but model the missingness mechanism.
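The difference between the first two treatments can be illustrated with a simple proportion-correct sketch (the third, model-based treatment of the missingness mechanism is beyond this snippet; all names here are illustrative):

```python
def proportion_correct(responses):
    """responses: list of 1 (correct), 0 (incorrect), or None (not reached).
    Returns the proportion correct under two treatments of not-reached items."""
    observed = [r for r in responses if r is not None]
    ignore = sum(observed) / len(observed)     # drop the missing responses
    as_wrong = sum(observed) / len(responses)  # score missing as incorrect
    return ignore, as_wrong

ignore, as_wrong = proportion_correct([1, 1, 0, 1, None, None])
# ignore = 0.75, as_wrong = 0.5
```

The gap between the two estimates grows with the number of not-reached items, which is why the choice of treatment matters most for slow but able respondents.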
This article proposes a general mixture item response theory (IRT) framework that allows for classes of persons to differ with respect to the type of processes underlying the item responses. Through the use of mixture models, nonnested IRT models with different structures can be estimated for different classes, and class membership can be estimated for each person in the sample. If researchers are able to provide competing measurement models, this mixture IRT framework may help them deal with some violations of measurement invariance.
In item response theory, modelling the item response times in addition to the item responses may improve the detection of possible between- and within-subject differences in the process that resulted in the responses. For instance, if respondents rely on rapid guessing on some items but not on all, the joint distribution of the responses and response times will be a multivariate within-subject mixture distribution. Suitable parametric methods to detect these within-subject differences have been proposed.
Br J Math Stat Psychol, February 2018
By considering information about response time (RT) in addition to response accuracy (RA), joint models for RA and RT such as the hierarchical model (van der Linden, 2007) can improve the precision with which ability is estimated over models that only consider RA. The hierarchical model, however, assumes that only the person's speed is informative of ability. This assumption of conditional independence between RT and ability given speed may be violated in practice, and ignores collateral information about ability that may be present in the residual RTs.
In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity assumptions in the underlying linear model. Past research has, however, shown that the transformed response times are not always normal.
Linking and equating procedures are used to make the results of different test forms comparable. In cases where no assumption of randomly equivalent groups can be made, some form of linking design is used. In practice, the amount of data available to link the two tests is often very limited for logistical and security reasons, which affects the precision of linking procedures.
With the widespread use of computerized tests in educational measurement and cognitive psychology, registration of response times has become feasible in many applications. Considering these response times helps provide a more complete picture of the performance and characteristics of persons beyond what is available based on response accuracy alone. Statistical models such as the hierarchical model (van der Linden, 2007) have been proposed that jointly model response time and accuracy.
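A minimal generative sketch of such a joint model can look as follows, assuming a Rasch model for accuracy given ability theta and a log-normal response time with mean depending on speed tau; the parameter values and names are illustrative, and the real hierarchical model additionally specifies a population model for the person and item parameters.

```python
import math, random

def simulate_item(theta, tau, b=0.0, beta=1.0, sigma=0.3, rng=random):
    """One (accuracy, response time) pair: accuracy from a Rasch model with
    difficulty b, log response time normal with mean beta - tau (higher speed
    means shorter times). Given (theta, tau), accuracy and time are drawn
    independently, i.e. conditional independence holds by construction."""
    p_correct = 1.0 / (1.0 + math.exp(-(theta - b)))
    correct = rng.random() < p_correct
    log_time = rng.gauss(beta - tau, sigma)
    return correct, math.exp(log_time)

random.seed(1)
data = [simulate_item(theta=0.5, tau=0.2) for _ in range(5)]
```

Because the dependence between time and accuracy enters only through the correlation between theta and tau at the person level, any residual time-accuracy dependence in real data signals a violation of the model's conditional independence assumption.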
The assumption of conditional independence between response time and accuracy given speed and ability is commonly made in response time modelling. However, this assumption might be violated in some cases, meaning that the relationship between the response time and the response accuracy of the same item cannot be fully explained by the correlation between the overall speed and ability. We propose to explicitly model the residual dependence between time and accuracy by incorporating the effects of the residual response time on the intercept and the slope parameter of the IRT model for response accuracy.
It is becoming more feasible and common to register response times in the application of psychometric tests. Researchers thus have the opportunity to jointly model response accuracy and response time, which provides users with more relevant information. The most common choice is to use the hierarchical model (van der Linden, 2007, Psychometrika, 72, 287), which assumes conditional independence between response time and accuracy, given a person's speed and ability.
In this paper, test equating is considered as a missing data problem. The unobserved responses of the reference population to the new test must be imputed in order to specify a new cutscore, such that the proportion of students from the reference population who would have failed the new exam is approximately equal to the proportion who failed the reference exam.
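The core idea of matching fail proportions can be sketched as follows. This is a simplified empirical version operating directly on observed scores, not the imputation procedure the paper proposes; the names are illustrative.

```python
def matched_cutscore(new_scores, reference_fail_rate):
    """Pick the lowest cutscore on the new test whose fail proportion is at
    least the fail proportion observed on the reference exam."""
    for cut in sorted(set(new_scores)):
        fail_rate = sum(s < cut for s in new_scores) / len(new_scores)
        if fail_rate >= reference_fail_rate:
            return cut
    return max(new_scores) + 1  # everyone would need to fail

cut = matched_cutscore([10, 12, 15, 18, 20, 22, 25, 28],
                       reference_fail_rate=0.25)
# cut = 15: exactly 2 of 8 scores fall below it
```

With discrete scores the match can only be approximate, which is one reason the missing-data formulation with imputed responses is attractive.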
Br J Math Stat Psychol, February 2016
An important distinction between different models for response time and accuracy is whether conditional independence (CI) between response time and accuracy is assumed. In the present study, a test for CI given an exponential family model for accuracy (for example, the Rasch model or the one-parameter logistic model) is proposed and evaluated in a simulation study. The procedure is based on non-parametric Kolmogorov-Smirnov tests.
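The two-sample Kolmogorov-Smirnov statistic at the heart of such a procedure can be sketched in a few lines; the full test additionally exploits the exponential family structure of the accuracy model, which this sketch omits.

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between the
    two empirical distribution functions."""
    a, b = sorted(sample_a), sorted(sample_b)
    ecdf = lambda s, x: sum(v <= x for v in s) / len(s)
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in sorted(set(a + b)))

# Under CI, the response times of correct and incorrect responses to the
# same item should follow the same distribution, so a large statistic
# signals a violation:
d = ks_statistic([1.2, 1.5, 2.0, 2.4], [1.3, 1.6, 2.1, 2.5])
```

Being non-parametric, the statistic makes no distributional assumption about the response times themselves, which is what makes it a natural building block for a CI test.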