Accurate classification of thyroid nodules as benign or malignant is clinically important because malignant nodules indicate thyroid cancer. However, the subtle contour differences between benign and malignant nodules, together with texture features obscured by the inherent noise of ultrasound images, limit the accuracy of most classification models. To address this, we propose a Bidirectional Interaction Directional Variance Attention model based on an Increased-Transformer, named IFormer-DVNet. The Increased-Transformer performs global feature modeling on the feature maps extracted by the Convolutional Feature Extraction Module (CFEM), which substantially alleviates noise interference in ultrasound images. The Bidirectional Interaction Directional Variance Attention module (BIDVA) dynamically computes attention weights from the variance of the input tensor along the vertical and horizontal directions, allowing the model to focus on information-rich regions of the image; the vertical and horizontal features are then combined interactively to strengthen the model's representational capacity. For training, we designed a Multi-Dimensional Loss function (MD Loss) that widens the boundary between different classes, reduces the distance between samples of the same class, and helps mitigate class imbalance in the dataset. We evaluated our model on the public TNCD dataset and a private dataset, achieving accuracies of 76.55% and 93.02%, respectively, and outperforming other state-of-the-art classification networks on all evaluation metrics.
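To make the two key ideas concrete, the sketch below illustrates one plausible reading of the abstract's description: attention weights derived from row-wise and column-wise variance of a feature map, combined interactively, and a contrastive-style stand-in for the MD Loss that pulls same-class samples together and pushes different-class samples beyond a margin. The function names, the outer-product combination, the residual reweighting, and the margin formulation are all our assumptions; the paper's exact formulations may differ.

```python
import numpy as np

def softmax(a, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def directional_variance_attention(x):
    """Sketch of variance-driven bidirectional attention (assumed form).
    x: feature map of shape (C, H, W)."""
    row_var = x.var(axis=2)                       # variance of each row    -> (C, H)
    col_var = x.var(axis=1)                       # variance of each column -> (C, W)
    row_w = softmax(row_var, axis=1)              # vertical attention weights
    col_w = softmax(col_var, axis=1)              # horizontal attention weights
    # Interactive combination of the two directions via an outer product
    attn = row_w[:, :, None] * col_w[:, None, :]  # (C, H, W)
    return x * (1.0 + attn)                       # residual reweighting (assumption)

def pairwise_margin_loss(features, labels, margin=2.0):
    """Contrastive-style stand-in for the MD Loss idea: shrink same-class
    distances, push different-class pairs beyond `margin` (hypothetical)."""
    n = len(labels)
    loss, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(features[i] - features[j])
            if labels[i] == labels[j]:
                loss += d ** 2                     # pull same class together
            else:
                loss += max(0.0, margin - d) ** 2  # push classes apart
            pairs += 1
    return loss / pairs
```

Under this reading, rows and columns whose features vary more (i.e., carry more structure than flat noise) receive larger weights, which matches the abstract's claim that the model attends to information-rich regions.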
DOI: http://dx.doi.org/10.1088/2057-1976/ad9f68