Accurate and efficient motion estimation is a crucial component of real-time ultrasound elastography (USE). However, obtaining radiofrequency (RF) ultrasound data in clinical practice can be challenging. In contrast, B-mode (BM) data is readily available, but elastographic images derived from BM data are sub-optimal. Furthermore, many conventional ultrasound devices (e.g., portable devices) do not provide an elastography mode, which has become a significant obstacle to the wider use of elastography on such devices. To address these challenges, we developed a teacher-student guided knowledge distillation for an unsupervised convolutional neural network (TSGUPWC-Net) that improves the accuracy of BM motion estimation by employing a well-established convolutional neural network (CNN) named modified pyramid warping and cost volume network (MPWC-Net). A pre-trained teacher model based on RF data is used to guide the training of a student model on BM data. Innovations include spatial attention transfer at intermediate layers to strengthen the teacher's guidance of the student. The loss function consists of a displacement-field smoothness term, a knowledge distillation loss, and an intermediate-layer loss. We evaluated our method on simulated data, phantoms, and in vivo ultrasound data. The results indicate that our method achieves higher signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) values in axial strain estimation than a model trained on BM data alone. The model is unsupervised and requires no ground-truth labels during training, making it highly promising for motion estimation applications.
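The abstract describes a loss built from three terms: displacement-field smoothness, knowledge distillation from the RF-trained teacher, and an intermediate-layer (spatial attention transfer) term. The sketch below is not the authors' code; it is a minimal PyTorch-style illustration of how such a combined loss could be assembled, with tensor shapes, loss weights, the L1/MSE choices, and the attention-map definition all being assumptions rather than details taken from the paper.

```python
# Hypothetical sketch of the three-term loss described in the abstract.
# All weights, shapes, and the attention-map formulation are assumptions.
import torch
import torch.nn.functional as F

def smoothness_loss(flow):
    """First-order smoothness of a displacement field of shape (N, 2, H, W)."""
    dx = flow[:, :, :, 1:] - flow[:, :, :, :-1]
    dy = flow[:, :, 1:, :] - flow[:, :, :-1, :]
    return dx.abs().mean() + dy.abs().mean()

def spatial_attention(feat):
    """Channel-pooled spatial attention map, L2-normalised per sample (assumed form)."""
    att = feat.pow(2).mean(dim=1, keepdim=True)                  # (N, 1, H, W)
    return att / (att.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-8)

def total_loss(student_flow, teacher_flow, student_feats, teacher_feats,
               w_smooth=0.1, w_kd=1.0, w_at=0.5):
    """Weighted sum of the three terms named in the abstract (weights are placeholders)."""
    l_smooth = smoothness_loss(student_flow)
    # Distill the RF-teacher's displacement estimate into the BM student.
    l_kd = F.l1_loss(student_flow, teacher_flow.detach())
    # Spatial attention transfer at matched intermediate layers.
    l_at = sum(F.mse_loss(spatial_attention(s), spatial_attention(t).detach())
               for s, t in zip(student_feats, teacher_feats))
    return w_smooth * l_smooth + w_kd * l_kd + w_at * l_at
```

In this reading, only the student receives gradients (the teacher terms are detached), which matches the abstract's statement that a pre-trained RF teacher guides an unsupervised BM student without ground-truth labels.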
DOI: http://dx.doi.org/10.1007/s11517-024-03078-z