Publications by authors named "Ibrahim S Bayrakdar"

The effect of rapid maxillary expansion (RME) on the nasal and pharyngeal airways in children remains uncertain. This retrospective study utilized computational fluid dynamics (CFD) to assess the changes in ventilation parameters caused by RME in children. Pre- and post-RME cone beam computed tomography (CBCT) images of 20 patients (4 males, mean age 13 ± 2 years) treated with RME for maxillary transverse insufficiency were evaluated.
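The snippet does not list the ventilation parameters evaluated, but CFD airway studies commonly report quantities such as pressure drop and nasal airway resistance (R = ΔP / Q). A minimal sketch of that calculation, with placeholder values rather than data from the study:

```python
# Minimal sketch: nasal airway resistance as pressure drop divided by flow rate.
# The numbers below are placeholders for illustration, not results from the study.

def airway_resistance(pressure_drop_pa: float, flow_rate_m3_per_s: float) -> float:
    """Airway resistance R = dP / Q, in Pa*s/m^3."""
    return pressure_drop_pa / flow_rate_m3_per_s

# Hypothetical pre- and post-RME values at the same inspiratory flow rate.
flow = 250e-6  # 250 mL/s expressed in m^3/s
print(f"pre-RME : {airway_resistance(40.0, flow):,.0f} Pa*s/m^3")
print(f"post-RME: {airway_resistance(28.0, flow):,.0f} Pa*s/m^3")
```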


Objectives: The nasopalatine canal (NPC) is an anatomical formation with varying morphology. The NPC can be visualized using cone-beam computed tomography (CBCT). CBCT has also been used in many studies on artificial intelligence (AI).


There are various challenges in the segmentation of anatomical structures with artificial intelligence due to the differing structural features of the relevant region or tissue. The aim of this study was to detect the nasolacrimal canal (NLC) in cone-beam computed tomography (CBCT) images using the nnU-Net v2 convolutional neural network (CNN) model and to evaluate the model's performance in automatic segmentation. CBCT images of 100 patients were randomly selected from the data archive.
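The snippet does not state the evaluation metric, but automatic segmentation of this kind is typically scored with the Dice similarity coefficient against the manual labels. A minimal sketch with toy binary masks (not study data):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(pred, truth).sum() / total)

# Toy 3D volumes standing in for a predicted NLC mask and its manual label.
rng = np.random.default_rng(0)
pred = rng.integers(0, 2, size=(64, 64, 64))
truth = rng.integers(0, 2, size=(64, 64, 64))
print(f"Dice: {dice_coefficient(pred, truth):.3f}")
```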


Objectives: The current study aimed to automatically detect tooth presence, number the teeth, and classify types of periodontal bone defects from CBCT images using a segmentation method with an advanced artificial intelligence (AI) algorithm.

Methods: This study utilized a dataset of CBCT volumes collected from 502 individual subjects. Initially, 250 CBCT volumes were used for automatic tooth segmentation and numbering.
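A minimal sketch of how per-tooth presence and numbering might be read off a multi-class segmentation output; the class-to-FDI mapping, the voxel threshold, and the output format are assumptions for illustration, not the authors' pipeline:

```python
import numpy as np

# Hypothetical mapping from segmentation class indices to FDI tooth numbers;
# a full map would cover all 32 permanent teeth.
CLASS_TO_FDI = {1: 11, 2: 12, 3: 13, 4: 14}

def teeth_present(label_map: np.ndarray, min_voxels: int = 100) -> dict[int, bool]:
    """Report which FDI tooth numbers appear in a predicted 3D label map."""
    return {
        fdi: int((label_map == class_id).sum()) >= min_voxels  # ignore tiny islands
        for class_id, fdi in CLASS_TO_FDI.items()
    }

toy_volume = np.zeros((64, 64, 64), dtype=np.uint8)
toy_volume[10:20, 10:20, 10:20] = 1   # fake central incisor (FDI 11)
print(teeth_present(toy_volume))       # {11: True, 12: False, 13: False, 14: False}
```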


Background: We explored whether the feature aggregation and refinement network (FARNet) algorithm accurately identified posteroanterior (PA) cephalometric landmarks.

Methods: We identified 47 landmarks on 1,431 PA cephalograms, of which 1,177 were used for training, 117 for validation, and 137 for testing. A FARNet-based artificial intelligence (AI) algorithm automatically detected the landmarks.
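Cephalometric landmark detectors are usually scored by the mean radial error (the average Euclidean distance between predicted and reference points) together with a success detection rate at fixed thresholds. A minimal sketch of the former, assuming coordinates have already been converted to millimetres:

```python
import numpy as np

def mean_radial_error(pred: np.ndarray, truth: np.ndarray) -> float:
    """Mean Euclidean distance between predicted and reference landmarks.

    Both arrays have shape (n_landmarks, 2), holding (x, y) coordinates.
    """
    return float(np.linalg.norm(pred - truth, axis=1).mean())

# Toy coordinates for 47 landmarks; the offsets simulate prediction error.
rng = np.random.default_rng(1)
truth = rng.uniform(0, 200, size=(47, 2))
pred = truth + rng.normal(0, 1.5, size=(47, 2))
print(f"MRE: {mean_radial_error(pred, truth):.2f} mm")
```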


Objectives: Accurate identification and numbering of teeth on radiographs is essential for clinicians. The aim of the present study was to validate the hypothesis that YOLOv5, a type of artificial intelligence model, can be trained to detect and number teeth in periapical radiographs.

Materials And Methods: A total of 6,446 anonymized periapical radiographs without motion-related artifacts were randomly selected from the database.
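For readers unfamiliar with YOLOv5, inference with a fine-tuned detector follows the pattern below via the public ultralytics/yolov5 Torch Hub entry point; the weight file and image name are hypothetical placeholders, not artifacts released with the study:

```python
import torch

# Load custom weights through the public ultralytics/yolov5 hub interface.
# "teeth_best.pt" and the input image are hypothetical placeholders.
model = torch.hub.load("ultralytics/yolov5", "custom", path="teeth_best.pt")

results = model("periapical_example.jpg")
detections = results.pandas().xyxy[0]  # one row per detected (numbered) tooth
print(detections[["name", "confidence", "xmin", "ymin", "xmax", "ymax"]])
```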

Article Synopsis
  • This study explores the use of automated deep learning (DL) systems to improve the segmentation of maxillary sinuses and their associated diseases in Cone-Beam Computed Tomography (CBCT) images, which can benefit surgical planning for physicians.
  • The modified YOLOv5x architecture with transfer learning was utilized to analyze a dataset of 307 anonymized CBCT images, identifying conditions such as mucous retention cysts and mucosal thickenings.
  • Results showed high accuracy in segmentation, with F1 scores indicating successful detection of both healthy and diseased maxillary sinuses using the AI model.

Dental fillings, frequently used in dentistry to address various dental tissue issues, may pose problems when not aligned with the anatomical contours and physiology of dental and periodontal tissues. Our study aims to detect the prevalence and distribution of normal and overhanging filling restorations on panoramic radiography images using a deep CNN architecture trained through supervised learning. A total of 10480 fillings and 2491 overhanging fillings were labeled using CranioCatch software from 2473 and 1850 images, respectively.

Article Synopsis
  • This study evaluated how well a deep learning system can identify tooth development stages on panoramic radiographs of children aged 5 to 14.
  • A YOLOv5 model was used to analyze 1500 images, and its correct detections and errors were counted (see the sketch after this list).
  • The model identified tooth development stages with high accuracy, which can support dentists' treatment decisions for pediatric patients.
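A minimal sketch of how counted hits and misses translate into the precision, recall, and F1 metrics usually reported for such detectors; the counts below are hypothetical, not the study's results:

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Standard detection metrics from true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Hypothetical counts for illustration only.
p, r, f = precision_recall_f1(tp=1380, fp=95, fn=120)
print(f"precision={p:.3f} recall={r:.3f} F1={f:.3f}")
```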

Objective: To investigate the effectiveness of using YOLO-v5x in detecting fixed prosthetic restoration in panoramic radiographs.

Study Design: Descriptive study. Place and Duration of the Study: Department of Oral and Maxillofacial Radiology, Eskisehir Osmangazi University, Eskisehir, Turkiye, from November 2022 to April 2023.


Introduction: Oral squamous cell carcinomas (OSCC) seen in the oral cavity are a category of diseases that dentists may diagnose and even cure. This study evaluated the performance of diagnostic computer software developed to detect oral cancer lesions in retrospective intraoral patient images.

Materials And Methods: Oral cancer lesions were labeled using the CranioCatch labeling program (CranioCatch, Eskişehir, Turkey) with the polygonal labeling method on a total of 65 anonymized retrospective intraoral images of oral mucosa from patients in our clinic whose oral cancer had been diagnosed histopathologically by incisional biopsy.
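A minimal sketch of the usual intermediate step between polygonal labeling and model training: rasterizing a polygon annotation into a binary lesion mask. The polygon coordinates and image size are made up for illustration:

```python
import numpy as np
from PIL import Image, ImageDraw

def polygon_to_mask(polygon_xy: list[tuple[int, int]], size: tuple[int, int]) -> np.ndarray:
    """Rasterize a polygon (list of (x, y) vertices) into a binary mask of the given (width, height)."""
    mask = Image.new("L", size, 0)
    ImageDraw.Draw(mask).polygon(polygon_xy, outline=1, fill=1)
    return np.array(mask, dtype=np.uint8)

# Hypothetical lesion outline on a 640x480 intraoral photograph.
lesion = [(120, 80), (180, 90), (200, 150), (140, 170), (110, 130)]
mask = polygon_to_mask(lesion, size=(640, 480))
print("lesion area in pixels:", int(mask.sum()))
```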


Objectives: This study aimed to assess the effectiveness of deep convolutional neural network (CNN) algorithms for the detection and segmentation of overhanging dental restorations in bitewing radiographs.

Methods: A total of 1160 anonymized bitewing radiographs were used to develop the artificial intelligence (AI) system for the detection and segmentation of overhanging restorations. The data were then divided into three groups: 80% for training (930 images, 2399 labels), 10% for validation (115 images, 273 labels), and 10% for testing (115 images, 306 labels).
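A minimal sketch of an 80/10/10 split at the image level; the file names are placeholders, and exact subset sizes depend on how the fractions are rounded (the study reports 930/115/115):

```python
import random

def split_80_10_10(items: list[str], seed: int = 42) -> tuple[list[str], list[str], list[str]]:
    """Shuffle image identifiers and split them into 80/10/10 train/validation/test subsets."""
    shuffled = items[:]
    random.Random(seed).shuffle(shuffled)
    n_train = int(len(shuffled) * 0.8)
    n_val = int(len(shuffled) * 0.1)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

image_ids = [f"bitewing_{i:04d}.png" for i in range(1160)]  # placeholder names
train, val, test = split_80_10_10(image_ids)
print(len(train), len(val), len(test))  # 928 116 116 with this simple rounding
```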


Objectives: In the interpretation of panoramic radiographs (PRs), the identification and numbering of teeth are an important part of correct diagnosis. This study evaluates the effectiveness of YOLO-v5 in the automatic detection, segmentation, and numbering of deciduous and permanent teeth in mixed dentition pediatric patients based on PRs.

Methods: A total of 3854 PRs of pediatric patients with mixed dentition were labeled for deciduous and permanent teeth using the CranioCatch labeling program.


Objectives: The purpose of this study was to evaluate the effectiveness of dental caries segmentation on panoramic radiographs taken from children in primary dentition, mixed dentition, and permanent dentition with artificial intelligence (AI) models developed using the deep learning method.

Methods: This study used 6075 panoramic radiographs taken from children aged between 4 and 14 to develop the AI model. The radiographs included in the study were divided into three groups: primary dentition (n: 1857), mixed dentition (n: 1406), and permanent dentition (n: 2812).


Background: The purpose of this study was to investigate the morphology of maxillary first premolar mesial root concavity and to analyse its relation to periodontal bone loss (BL) using cone beam computed tomography (CBCT) and panoramic radiographs.

Methods: The mesial root concavity of maxillary premolar teeth was analysed via CBCT. The sex and age of the patients, starting position and depth of the root concavity, apicocoronal length of the concavity on the crown or root starting from the cementoenamel junction (CEJ), total apicocoronal length of the concavity, amount of bone loss both in CBCT images and panoramic radiographs, location of the furcation, length of the buccal and palatinal roots, and buccopalatinal cervical root width were measured.


Objectives: The aim of this artificial intelligence (AI) study was to develop a deep learning algorithm capable of automatically classifying periapical and bitewing radiography images as either periodontally healthy or unhealthy and to assess the algorithm's diagnostic success. Materials and Methods: The sample of the study consisted of 1120 periapical radiographs (560 periodontally healthy, 560 periodontally unhealthy) and 1498 bitewing radiographs (749 periodontally healthy, 749 periodontally unhealthy). From the main datasets of both radiography types, three sub-datasets were randomly created: a training set (80%), a validation set (10%), and a test set (10%).
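The abstract does not name the network architecture, so as an assumption for illustration the sketch below fine-tunes an ImageNet-pretrained ResNet-18 (recent torchvision) into a two-class healthy/unhealthy classifier and runs one training step on a dummy batch:

```python
import torch
import torch.nn as nn
from torchvision import models

# Assumed architecture for illustration: ResNet-18 with a two-class head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # classes: healthy / unhealthy

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative step on a dummy batch of radiograph crops (grayscale images
# would be replicated to three channels before reaching this point).
images = torch.rand(4, 3, 224, 224)
labels = torch.tensor([0, 1, 1, 0])
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"dummy loss: {loss.item():.3f}")
```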


This study aims to evaluate the effectiveness of employing a deep learning approach for the automated detection of pulp stones in panoramic imaging. A comprehensive dataset comprising 2409 panoramic radiography images (7564 labels) underwent labeling using the CranioCatch labeling program, developed in Eskişehir, Turkey. The dataset was stratified into three distinct subsets: training (n = 1929, 80% of the total), validation (n = 240, 10% of the total), and test (n = 240, 10% of the total) sets.


Background: A deep learning model trained on a large image dataset can be used to detect and discriminate targets with similar but not identical appearances. The aim of this study is to evaluate the post-training performance of the CNN-based YOLOv5x algorithm in the detection of white spot lesions in post-orthodontic oral photographs using the limited data available, and to serve as a preliminary study for fully automated models that can be clinically integrated in the future.

Methods: A total of 435 images in JPG format were uploaded into the CranioCatch labeling software, and white spot lesions were labeled.


Objective: The aim of this study is to assess the efficacy of employing a deep learning methodology for the automated identification and enumeration of permanent teeth in bitewing radiographs. The experimental procedures and techniques employed in this study are described in the following section.

Study Design: A total of 1248 bitewing radiography images were annotated using the CranioCatch labeling program, developed in Eskişehir, Turkey.


One of the most common congenital anomalies of the head and neck region is cleft lip and palate. This retrospective case-control study aimed to compare maxillary sinus volumes in individuals with bilateral cleft lip and palate (BCLP) to those of a non-cleft control group. The study comprised 72 participants: 36 patients with BCLP and 36 gender- and age-matched control subjects.
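The snippet does not describe how the volumes were obtained, but a segmentation-based volume measurement reduces to voxel count times the volume of a single voxel. A minimal sketch with a toy mask and an assumed 0.3 mm isotropic CBCT voxel size:

```python
import numpy as np

def segmented_volume_cm3(mask: np.ndarray, voxel_size_mm: tuple[float, float, float]) -> float:
    """Volume of a binary segmentation mask: voxel count x single-voxel volume, in cm^3."""
    voxel_mm3 = voxel_size_mm[0] * voxel_size_mm[1] * voxel_size_mm[2]
    return float(mask.astype(bool).sum()) * voxel_mm3 / 1000.0  # 1 cm^3 = 1000 mm^3

# Toy cube standing in for a segmented maxillary sinus.
mask = np.zeros((100, 100, 100), dtype=np.uint8)
mask[20:80, 20:80, 20:80] = 1
print(f"{segmented_volume_cm3(mask, (0.3, 0.3, 0.3)):.2f} cm^3")  # 5.83 cm^3
```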


Objectives: The study aims to develop an artificial intelligence (AI) model based on nnU-Net v2 for automatic maxillary sinus (MS) segmentation in cone beam computed tomography (CBCT) volumes and to evaluate the performance of this model.

Methods: In 101 CBCT scans, the MS were annotated using the CranioCatch labelling software (Eskisehir, Turkey). The dataset was divided into 3 parts: 80 CBCT scans for training the model, 11 CBCT scans for model validation, and 10 CBCT scans for testing the model. The model training was conducted using the nnU-Net v2 deep learning model with a learning rate of 0.


Objective: The aim of this study is to determine whether a deep learning (DL) model can predict the surgical difficulty of an impacted maxillary third molar from panoramic images before surgery.

Materials And Methods: The dataset consists of 708 panoramic radiographs of patients who presented to the Oral and Maxillofacial Surgery Clinic for various reasons. The difficulty of each maxillary third molar was scored on panoramic images based on depth (V), angulation (H), relation with the maxillary sinus (S), and relation with the ramus (R).
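The snippet names the four scored components but not their point values, so the sketch below only illustrates combining hypothetical per-component scores into a single difficulty index; it is not the study's scoring scheme:

```python
def total_difficulty(v: int, h: int, s: int, r: int) -> int:
    """Sum hypothetical per-component scores: depth (V), angulation (H),
    sinus relation (S), and ramus relation (R)."""
    return v + h + s + r

# Hypothetical component scores for one impacted maxillary third molar.
print(total_difficulty(v=2, h=3, s=1, r=2))  # -> 8
```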


Background: This retrospective study aimed to develop a deep learning algorithm for the interpretation of panoramic radiographs and to examine the performance of this algorithm in the detection of periodontal bone losses and bone loss patterns.

Methods: A total of 1121 panoramic radiographs were used in this study. Bone losses in the maxilla and mandible (total alveolar bone loss) (n = 2251), interdental bone losses (n = 25303), and furcation defects (n = 2815) were labeled using the segmentation method.


The objective of this study is to use a deep learning model based on a CNN architecture to detect second mesiobuccal (MB2) canals, which occur as an anatomical variation in the root canals of maxillary molars. In the current study, 922 axial sections from the cone beam computed tomography (CBCT) images of 153 patients were used. The segmentation method was employed to identify the MB2 canals in maxillary molars that had not previously undergone endodontic treatment.
