Background: Severity-specific guidelines based on the Pediatric Respiratory Assessment Measure (PRAM), a validated clinical score, reduce pediatric asthma hospitalization rates.
Objective: To develop, pretest the educational value of, and revise an electronic learning module that trains health care professionals in the use of the PRAM.
Methods: The respiratory efforts of 32 children with acute asthma were videotaped and pulmonary auscultation was recorded.
Background: Clinical reasoning is the cornerstone of medical practice. To date, there is no established framework regarding clinical reasoning difficulties, how to identify them, and how to remediate them.
Aim: To identify the most common clinical reasoning difficulties as they present in residents' patient encounters, case summaries, or medical notes.
Survey questionnaires are among the most widely used data-gathering techniques in the social sciences, and many factors can influence respondents' answers to items and affect data validity. Among these factors, accumulated research demonstrates that the verbal and numeric labels attached to an item's response categories can substantially influence how respondents make their choices within the proposed response format. In line with these findings, this article uses Andrich's Rating Scale Model to illustrate the influence that the quantifier adverb "totally," used to label or emphasize the extreme categories, can have on respondents' answers.
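For readers unfamiliar with the model, a minimal statement of Andrich's Rating Scale Model in common notation is given below; the symbols (θ_n for the respondent location, δ_i for the item location, τ_x for the threshold between categories x−1 and x) are standard conventions and are not taken from the article itself.

```latex
% Rasch-Andrich Rating Scale Model (common formulation; notation assumed, not from the article).
% Log-odds of responding in category x rather than x-1 of item i:
\[
  \ln\frac{P_{nix}}{P_{ni(x-1)}} = \theta_n - \delta_i - \tau_x ,
\]
% which yields the category probabilities (with \tau_0 \equiv 0 and m+1 ordered categories):
\[
  P(X_{ni}=x) =
  \frac{\exp\sum_{j=0}^{x}\bigl(\theta_n - \delta_i - \tau_j\bigr)}
       {\sum_{k=0}^{m}\exp\sum_{j=0}^{k}\bigl(\theta_n - \delta_i - \tau_j\bigr)} .
\]
% An effect of labelling the extreme categories with "totally" would surface as a shift
% in the estimated extreme thresholds \tau_1 and \tau_m relative to a neutral labelling.
```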
Whether paper-and-pencil or computerized adaptive, tests are usually described by a set of rules governing how they are administered: which item comes first, which item should follow any given item, and when to administer the last one. This article focuses on the latter and examines the effect of two stopping rules on the estimated sampling distribution of the ability estimate in a computerized adaptive test (CAT): the number of items administered and an a priori determined size of the standard error of the ability estimate.
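As an illustration only (not drawn from the article), the sketch below contrasts the two stopping rules under an assumed Rasch item bank with EAP scoring: a fixed test length versus a target standard error of the ability estimate. Item parameters, the prior, and the thresholds are invented for the example.

```python
# Minimal CAT simulation sketch contrasting two stopping rules:
#   (1) a fixed number of items administered, (2) a target SE of the ability estimate.
import numpy as np

rng = np.random.default_rng(0)
bank_b = rng.normal(0.0, 1.0, size=200)          # hypothetical Rasch item difficulties

def p_correct(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def eap(responses, bs, grid=np.linspace(-4, 4, 81)):
    """Expected a posteriori ability estimate and its posterior SD (used as the 'SE')."""
    prior = np.exp(-0.5 * grid**2)               # N(0,1) prior, unnormalised
    like = np.ones_like(grid)
    for x, b in zip(responses, bs):
        p = p_correct(grid, b)
        like *= p**x * (1 - p)**(1 - x)
    post = prior * like
    post /= post.sum()
    mean = (grid * post).sum()
    sd = np.sqrt(((grid - mean)**2 * post).sum())
    return mean, sd

def run_cat(true_theta, max_items=50, target_se=None):
    """Administer items until max_items is reached or, if target_se is given,
    until the posterior SD of the ability estimate falls below it."""
    used, responses, theta_hat, se = [], [], 0.0, np.inf
    while len(used) < max_items and (target_se is None or se > target_se):
        # Maximum-information selection: for the Rasch model, the most informative
        # unused item is the one whose difficulty is closest to the provisional estimate.
        remaining = [i for i in range(len(bank_b)) if i not in used]
        nxt = min(remaining, key=lambda i: abs(bank_b[i] - theta_hat))
        used.append(nxt)
        responses.append(int(rng.random() < p_correct(true_theta, bank_b[nxt])))
        theta_hat, se = eap(responses, bank_b[used])
    return theta_hat, se, len(used)

# Rule 1: fixed length of 20 items;  Rule 2: stop once SE <= 0.30.
print(run_cat(true_theta=1.0, max_items=20))
print(run_cat(true_theta=1.0, max_items=50, target_se=0.30))
```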
Questionnaire-based inquiries make it possible to obtain data quickly and at relatively low cost, but a number of factors may influence respondents' answers and affect data validity. Some of these factors relate to the individuals and the environment, while others relate directly to the characteristics of the questionnaire and its items: the text introducing the questionnaire, the order in which the items are presented, the number of response categories and their labels on the proposed scale, and the wording of the items. This article focuses on this last point; its goal is to show how the diagnostic features surrounding Rasch modelling can be used to study the impact of item wording in opinion/perception questionnaires on the responses obtained and on the location of the anchor points of the item response scale.
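By way of illustration only (the specific diagnostics used in the article are not reproduced here), item infit and outfit mean squares are among the standard Rasch diagnostics for flagging items whose response pattern departs from model expectation, for instance because of wording effects. The sketch below uses the dichotomous Rasch model and fabricated data.

```python
# Illustrative sketch (assumed, not the article's code): item infit/outfit mean squares
# for the dichotomous Rasch model; values near 1.0 indicate adequate fit.
import numpy as np

def item_fit(X, theta, b):
    """X: persons x items 0/1 matrix; theta: person measures; b: item difficulties."""
    P = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))   # expected scores
    W = P * (1.0 - P)                                           # model variances
    Z2 = (X - P) ** 2 / W                                       # squared standardized residuals
    outfit = Z2.mean(axis=0)                                    # unweighted mean square
    infit = ((X - P) ** 2).sum(axis=0) / W.sum(axis=0)          # information-weighted mean square
    return infit, outfit

# Tiny fabricated example: 500 simulated persons, 5 items.
rng = np.random.default_rng(1)
theta = rng.normal(size=500)
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
P = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
X = (rng.random(P.shape) < P).astype(int)
print(item_fit(X, theta, b))
```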
In a computerized adaptive test, we would like to obtain acceptable precision of the proficiency-level estimate using an optimal number of items. Unfortunately, decreasing the number of items introduces a certain degree of bias when the true proficiency level differs significantly from the a priori estimate. The authors suggest that it is possible to reduce the bias, and even the standard error of the estimate, by applying to each provisional estimate one or a combination of the following strategies: the adaptive correction for bias proposed by Bock and Mislevy (1982), an adaptive a priori estimate, and an adaptive integration interval.
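The following is a hedged sketch (my own paraphrase under assumed details, not the authors' implementation) of two of the three strategies named above: an adaptive a priori estimate, read here as recentring the prior on the provisional estimate, and an adaptive integration interval, read here as recentring the EAP quadrature grid on it. The Bock and Mislevy (1982) bias correction itself is not reproduced.

```python
# Sketch of EAP updates whose prior mean and quadrature interval follow the
# provisional estimate (assumed interpretation of the abstract's strategies).
import numpy as np

def eap_update(responses, difficulties, prior_mean=0.0, prior_sd=1.0,
               half_width=4.0, n_points=61):
    """One provisional EAP update for Rasch (1PL) responses, with the quadrature
    grid and the normal prior both centred on prior_mean."""
    grid = np.linspace(prior_mean - half_width, prior_mean + half_width, n_points)
    prior = np.exp(-0.5 * ((grid - prior_mean) / prior_sd) ** 2)
    like = np.ones_like(grid)
    for x, b in zip(responses, difficulties):
        p = 1.0 / (1.0 + np.exp(-(grid - b)))
        like *= p ** x * (1.0 - p) ** (1 - x)
    post = prior * like
    post /= post.sum()
    return float((grid * post).sum())

# Adaptive use: after each new response, recentre the prior and the grid
# on the last provisional estimate (fabricated response data).
responses, difficulties = [1, 1, 0, 1], [-0.5, 0.2, 0.8, 1.1]
theta_hat = 0.0
for k in range(1, len(responses) + 1):
    theta_hat = eap_update(responses[:k], difficulties[:k], prior_mean=theta_hat)
print(theta_hat)
```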