The development of accurate and rapid medical image segmentation for point-of-care diagnosis has become increasingly urgent in recent years. Although some pioneering work has applied complex modules to improve segmentation performance, the resulting models are often heavy, which is impractical for the modern clinical setting of point-of-care diagnosis. To address these challenges, we propose UltraNet, a state-of-the-art lightweight model that achieves competitive performance in segmenting multiple parts of medical images with the fewest parameters and the lowest computational complexity. To extract sufficient feature information while replacing cumbersome modules, the Shallow Focus Float Block (ShalFoFo) and the Dual-stream Synergy Feature Extraction (DuSem) are proposed at the shallow and deep levels, respectively. ShalFoFo is designed to capture finer-grained features containing more pixels, while DuSem extracts distinct deep semantic features from two different perspectives. Using them jointly enhances the accuracy and stability of UltraNet's segmentation results. To evaluate performance, UltraNet's generalization ability was assessed on five datasets covering different tasks. Compared to UNet, UltraNet reduces the parameter count and computational complexity by factors of 46 and 26, respectively. Experimental results demonstrate that UltraNet achieves a state-of-the-art balance among parameters, computational complexity, and segmentation performance. Code is available at https://github.com/Ziii1/UltraNet .
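The abstract does not describe how the dual-stream extraction is implemented; the sketch below is a minimal, hypothetical PyTorch illustration of one way a dual-stream block could extract deep features from two perspectives: a standard convolutional branch for local detail and a dilated-convolution branch for wider context, fused by a 1x1 convolution. The class name `DualStreamBlock` and all layer choices are assumptions for illustration only and do not reproduce UltraNet's actual DuSem design.

```python
# Hypothetical dual-stream feature-extraction sketch (not UltraNet's DuSem).
import torch
import torch.nn as nn


class DualStreamBlock(nn.Module):
    """Extracts features from two perspectives and fuses them."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Stream 1: standard 3x3 convolution for local detail.
        self.local = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        # Stream 2: dilated 3x3 convolution for a wider receptive field.
        self.context = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=2, dilation=2, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        # 1x1 convolution fuses the concatenated streams back to out_ch channels.
        self.fuse = nn.Conv2d(2 * out_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([self.local(x), self.context(x)], dim=1))


if __name__ == "__main__":
    block = DualStreamBlock(in_ch=64, out_ch=128)
    feats = block(torch.randn(1, 64, 32, 32))
    print(feats.shape)  # torch.Size([1, 128, 32, 32])
```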
DOI: http://dx.doi.org/10.1007/s12539-024-00682-3