Objective: Automatic semantic segmentation of the endocardium in echocardiographic videos is crucial for evaluating cardiac function and assisting doctors in making accurate diagnoses of heart disease. However, this task faces two distinct challenges: edge blurring, caused by speckle noise or excessive de-noising operations, and the lack of an effective approach for fusing multilevel features to obtain an accurate endocardium.
Methods: In this study, a deep learning model based on multilevel edge perception and calibration fusion is proposed to improve segmentation performance.