We study the use of raw ultrasound waveforms, often referred to as the "Radio Frequency" (RF) data, for the semantic segmentation of ultrasound scans to carry out dense, diagnostic labeling. We present W-Net, a novel Convolutional Neural Network (CNN) framework that employs the raw ultrasound waveforms in addition to the grey ultrasound image to semantically segment and label tissues for anatomical, pathological, or other diagnostic purposes. To the best of our knowledge, this is the first deep-learning or CNN approach to ultrasound segmentation that analyzes the raw RF data alongside the grey image.
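The abstract does not spell out the W-Net architecture, but the core idea is a segmentation CNN with two inputs: the raw RF data and the grey (B-mode) image. The sketch below is a minimal illustrative dual-branch network in PyTorch, not the authors' exact model; all layer sizes, the channel-concatenation fusion, and the assumption that the RF data is resampled to the image grid are illustrative choices.

# Minimal illustrative sketch (not the authors' exact W-Net): a two-branch
# encoder that ingests the raw RF data and the grey B-mode image separately,
# fuses their features, and decodes a dense per-pixel segmentation map.
# All layer sizes and the fusion strategy here are assumptions.
import torch
import torch.nn as nn

class DualInputSegNet(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Branch for the grey (B-mode) image: one input channel.
        self.img_enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Branch for the raw RF data, treated here as a single-channel 2-D map
        # (scanline x depth); real RF input would need its own preprocessing.
        self.rf_enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Fuse the two feature maps by channel concatenation, then decode
        # back to full resolution for per-pixel class logits.
        self.decoder = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, num_classes, 1),
        )

    def forward(self, image: torch.Tensor, rf: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.img_enc(image), self.rf_enc(rf)], dim=1)
        return self.decoder(fused)  # (N, num_classes, H, W) logits

# Usage sketch: grey image and RF map assumed resampled to the same 128x128 grid.
if __name__ == "__main__":
    img = torch.randn(1, 1, 128, 128)
    rf = torch.randn(1, 1, 128, 128)
    print(DualInputSegNet()(img, rf).shape)  # torch.Size([1, 4, 128, 128])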