Objective: Post-surgical lip symmetry is a key indicator of cleft repair success. Traditional assessments rely on distances between manually placed anatomical landmarks, which are impractical for video analysis and overlook texture and appearance. We propose an artificial intelligence (AI) approach that automates this assessment, analyzing lateral lip morphology to produce a quantitative symmetry evaluation.
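For context, a minimal sketch of the kind of landmark-distance symmetry measure the traditional approach relies on; the landmark pairs, coordinates, and scoring formula below are illustrative assumptions, not taken from this study:

```python
import numpy as np

def landmark_symmetry_score(landmarks: dict, midline_x: float) -> float:
    """Illustrative landmark-based symmetry: compare the distances of paired
    left/right lip landmarks to an estimated facial midline.
    Landmark names and the scoring formula are hypothetical."""
    pairs = [("left_cheilion", "right_cheilion"),
             ("left_crista_philtri", "right_crista_philtri")]
    ratios = []
    for left_name, right_name in pairs:
        d_left = abs(landmarks[left_name][0] - midline_x)
        d_right = abs(landmarks[right_name][0] - midline_x)
        # Ratio of shorter to longer distance: 1.0 means perfectly symmetric.
        ratios.append(min(d_left, d_right) / max(d_left, d_right))
    return float(np.mean(ratios))

# Example with made-up (x, y) pixel coordinates.
example = {"left_cheilion": (80.0, 200.0), "right_cheilion": (162.0, 201.0),
           "left_crista_philtri": (105.0, 185.0), "right_crista_philtri": (138.0, 186.0)}
print(landmark_symmetry_score(example, midline_x=121.0))
```

Such measures require accurate landmark placement on every frame, which is exactly what makes them impractical for video and blind to texture and appearance.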
Design: We use contrastive learning to quantify lip symmetry by measuring the similarity between the representations of the left and right sides of the lip, and this similarity is then used to classify the severity of asymmetry. The model does not require patient images for training. Instead, we introduce dissimilarities into face images from open datasets using two methods: temporal misalignment of video frames and face transformations that simulate the lip asymmetry observed in the target population. The model learns to distinguish the left and right image representations in order to assess asymmetry. We evaluated the model on 146 images of patients with repaired cleft lip.
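A minimal sketch of this idea, assuming a Siamese-style setup in PyTorch: a shared encoder embeds the left lip half and the mirrored right lip half, their cosine similarity serves as the symmetry score, and a margin-based contrastive loss separates similar pairs (two halves of the same control face) from dissimilar pairs (halves from temporally misaligned frames or synthetically transformed faces). The architecture, loss, and pairing scheme are assumptions for illustration, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HalfLipEncoder(nn.Module):
    """Small CNN that embeds one half of the lip region (illustrative architecture)."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(128, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.proj(self.features(x).flatten(1))
        return F.normalize(z, dim=1)  # unit-norm embeddings for cosine similarity

def symmetry_score(encoder: nn.Module, left: torch.Tensor, right: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between the left half and the horizontally flipped right half."""
    right_mirrored = torch.flip(right, dims=[-1])  # mirror so both halves share orientation
    return F.cosine_similarity(encoder(left), encoder(right_mirrored), dim=1)

def contrastive_loss(score: torch.Tensor, label: torch.Tensor, margin: float = 0.5) -> torch.Tensor:
    """Hypothetical pair labels: 1 = similar (same control frame),
    0 = dissimilar (temporally misaligned or transformed face)."""
    pos = label * (1.0 - score)                 # pull similar pairs toward score 1
    neg = (1 - label) * F.relu(score - margin)  # push dissimilar pairs below the margin
    return (pos + neg).mean()

if __name__ == "__main__":
    enc = HalfLipEncoder()
    left = torch.randn(4, 3, 64, 64)   # stand-in for cropped left lip halves
    right = torch.randn(4, 3, 64, 64)  # stand-in for cropped right lip halves
    s = symmetry_score(enc, left, right)
    print(contrastive_loss(s, torch.tensor([1., 0., 1., 0.])))
```

Mirroring one half before encoding is one simple way to make the two sides directly comparable; other alignment or normalization choices would also fit this framework.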
Results: The deep learning model trained with face transformations categorized patient images into five asymmetry levels, achieving a weighted accuracy of 75% and a Pearson correlation of 0.31 with evaluations by medical experts. The model trained with temporal misalignment achieved a weighted accuracy of 69% and a Pearson correlation of 0.27 on the same classification task.
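To make the evaluation concrete, the sketch below bins a continuous symmetry score into five asymmetry levels and computes a class-weighted accuracy and Pearson correlation against expert ratings. The thresholds, the toy data, and the particular reading of "weighted accuracy" (per-class recall averaged equally) are assumptions, since the abstract does not specify them:

```python
import numpy as np
from scipy.stats import pearsonr

def similarity_to_level(score: np.ndarray, thresholds=(0.2, 0.4, 0.6, 0.8)) -> np.ndarray:
    """Bin a symmetry score in [0, 1] into five asymmetry levels
    (0 = most asymmetric, 4 = most symmetric). Thresholds are hypothetical."""
    return np.digitize(score, thresholds)

def per_class_weighted_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """One possible reading of 'weighted accuracy': average the per-class recall
    so every asymmetry level counts equally regardless of its patient count."""
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(recalls))

# Toy example with made-up model scores and expert ratings.
scores = np.array([0.15, 0.35, 0.55, 0.75, 0.90, 0.30])
expert = np.array([0, 1, 2, 3, 4, 1])
pred = similarity_to_level(scores)
print("weighted accuracy:", per_class_weighted_accuracy(expert, pred))
print("Pearson r:", pearsonr(pred, expert)[0])
```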
Conclusions: We propose an automated approach for assessing lip asymmetry in patients with repaired cleft lip that trains a deep learning model on transformed facial images of control subjects, eliminating the need for manual anatomical landmark annotation. Our promising results suggest that this approach can provide a more efficient and objective tool for evaluating surgical outcomes.
DOI: http://dx.doi.org/10.1177/10556656241312730