Introduction: The validity, reliability and inter-method agreement of Peer Assessment Rating (PAR) scores from acrylic models and their digital analogues were assessed.

Method: Ten models of different occlusions were digitised using a 3Shape R700 laser scanner (Copenhagen, Denmark). Each set of models was conventionally and digitally PAR-scored twice, in random order, by 10 examiners, with a minimum of two weeks between repeat measurements. Repeatability was assessed using Carstensen's analysis. Inter-method agreement (IEMA) was assessed using Carstensen's limits of agreement (LoA).
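The core of a limits-of-agreement calculation is a Bland-Altman-style comparison of paired scores. Carstensen's method extends this to handle the replicate measurements per examiner used in the study; the minimal Python sketch below shows only the basic paired version, with illustrative values that are not data from the study.

    import numpy as np

    def limits_of_agreement(method_a, method_b, z=1.96):
        # Bland-Altman-style bias and 95% limits of agreement between
        # two measurement methods (simplified; Carstensen's approach
        # additionally models replicates per examiner).
        a = np.asarray(method_a, dtype=float)
        b = np.asarray(method_b, dtype=float)
        diff = a - b                 # per-model difference, e.g. digital - conventional PAR
        bias = diff.mean()           # systematic offset between methods
        sd = diff.std(ddof=1)        # spread of the differences
        return bias, (bias - z * sd, bias + z * sd)

    # Illustrative, randomly generated scores -- not study data.
    rng = np.random.default_rng(0)
    conventional = rng.normal(30, 8, size=10)               # weighted PAR, conventional scoring
    digital = conventional + rng.normal(-1.5, 2, size=10)   # digital scoring with a small bias
    bias, (lo, hi) = limits_of_agreement(digital, conventional)
    print(f"bias = {bias:.2f}, 95% LoA = ({lo:.2f}, {hi:.2f})")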

Results: Intra-examiner repeatability (IER) for the unweighted and weighted data was slightly better for the conventional than for the digital models. There was a slightly higher negative bias of -1.62 for the weighted PAR data for the digital models. IEMA for the overall weighted data ranged from -8.70 to 5.45 (95% confidence interval, CI). Intra-class correlation coefficients (ICC) for the weighted conventional data, for the individual and average scenarios, were 0.955 (95% CI 0.906-0.986) and 0.998 (95% CI 0.995-0.999), respectively. ICC for the weighted digital data, for the individual and average scenarios, were 0.99 (95% CI 0.97-1.00) and 1.00. The percentage reduction required to achieve an optimal occlusion increased by 0.4% for the digital scoring of the weighted data.
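The "individual" and "average" scenarios correspond to the single-rater and averaged-rater forms of the two-way random-effects ICC. The abstract does not state the exact ICC variant used, so the following Python sketch assumes the standard Shrout and Fleiss ICC(2,1) and ICC(2,k) formulas.

    import numpy as np

    def icc_two_way_random(scores):
        # scores: (n_targets x k_raters) array of ratings.
        # Returns ICC(2,1) ("individual") and ICC(2,k) ("average")
        # from the two-way random-effects ANOVA decomposition.
        x = np.asarray(scores, dtype=float)
        n, k = x.shape
        grand = x.mean()
        row_means = x.mean(axis=1)
        col_means = x.mean(axis=0)
        ms_rows = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between-targets
        ms_cols = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between-raters
        resid = x - row_means[:, None] - col_means[None, :] + grand
        ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))          # residual
        icc_single = (ms_rows - ms_err) / (
            ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
        icc_average = (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)
        return icc_single, icc_average

    # Illustrative 3 models x 2 raters matrix -- not study data.
    scores = np.array([[28.0, 30.0], [19.0, 21.0], [42.0, 43.0]])
    single, average = icc_two_way_random(scores)
    print(f"ICC individual = {single:.3f}, ICC average = {average:.3f}")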

Conclusion: Digital PAR scores obtained from scanned plastic models were valid and reliable; in this context, the digital semi-automated method can be used interchangeably with the conventional method of PAR scoring.
