Background: There are few scientific data on fully automated Peer Assessment Rating (PAR) scoring; this study compares several PAR scoring methods to assess their reliability.
Objectives: This investigation evaluated the PAR scores of plaster, 3D printed, and virtual digital models as scored by specialist orthodontists, dental auxiliaries, and undergraduate dental students, and by a fully automated method.
Materials And Methods: Twelve calibrated assessors determined the PAR score of a typodont, and this score was used as the gold standard. Measurements derived from a plaster model, a 3D printed model, and a digital model were compared. A total of 120 practitioners (specialist orthodontists, dental auxiliaries, and undergraduate dental students; n = 40 per group) scored the models (n = 10 per group). The digital models were scored twice, using OnyxCeph and OrthoAnalyzer (3Shape). The fully automated PAR scoring was performed with Model+ (Carestream Dental).
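For context, a PAR total is a weighted sum of occlusal component scores. The sketch below is a minimal illustration of such a weighted total only; the component names and the UK-style weightings shown are assumptions for illustration and are not taken from this study or its software.

```python
# Minimal sketch of a weighted PAR total (illustrative only).
# The weightings below are the commonly cited UK-style values and are an
# assumption; they are not drawn from this study.
WEIGHTS = {
    "contact_point_displacements": 1,  # upper + lower anterior segments
    "buccal_occlusion": 1,
    "overjet": 6,
    "overbite": 2,
    "centreline": 4,
}

def weighted_par(components: dict[str, int]) -> int:
    """Return the weighted PAR total for a dict of raw component scores."""
    return sum(WEIGHTS[name] * score for name, score in components.items())

# Hypothetical component scores, for demonstration only:
example = {
    "contact_point_displacements": 5,
    "buccal_occlusion": 2,
    "overjet": 3,
    "overbite": 1,
    "centreline": 0,
}
print(weighted_par(example))  # 5*1 + 2*1 + 3*6 + 1*2 + 0*4 = 27
```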
Results: Neither the type of model (P = 0.077), the practitioner category (P = 0.332), nor the interaction between the two (P = 0.728) had a statistically significant effect on PAR scoring. The mean PAR score and standard deviation were comparable across all models and groups, except in the automated group, where the standard deviation was smallest (SD = 0). Overall, the greatest variation was observed for weighted overjet and contact point displacements.
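The main-effect and interaction P-values reported above are consistent with a two-factor analysis of variance. The following is a minimal sketch, assuming a two-way ANOVA with interaction and using hypothetical data in place of the study's measurements; it is not the authors' analysis code.

```python
# Minimal sketch: two-way ANOVA of PAR score by model type and practitioner
# category with interaction. Data values are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

data = pd.DataFrame({
    "par_score": [22, 24, 23, 21, 25, 23, 22, 24, 23, 22, 24, 23],
    "model_type": ["plaster", "printed", "digital"] * 4,
    "practitioner": (["orthodontist"] * 3 + ["auxiliary"] * 3
                     + ["student"] * 3 + ["orthodontist"] * 3),
})

# PAR score modelled on model type, practitioner category, and their interaction
model = ols("par_score ~ C(model_type) * C(practitioner)", data=data).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)  # P-values for each main effect and the interaction
```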
Conclusions: PAR scoring of plaster, 3D printed, and digital study models by orthodontists, dental auxiliaries, dental students, and a fully automated method produced very similar results and can therefore be considered equivalent. Automated measurement improved repeatability compared with all groups of practitioners, but this difference did not reach statistical significance.
DOI: http://dx.doi.org/10.1093/ejo/cjac025