Indicator dilution methods have a long history in the quantification of both macro- and microvascular blood flow across many clinical applications. Various models have been employed to isolate the first pass of an indicator after an intravenous bolus injection. Indicator dilution techniques allow the estimation of hemodynamic parameters of a tumor or organ and may therefore provide useful diagnostic and therapy-monitoring information. In this paper, we review and discuss the properties of the lognormal function, the gamma variate function, the diffusion-with-drift model, and the lagged normal function, all of which have been used to model indicator dilution curves in different fields of medicine. We fit these models to contrast-enhanced ultrasound time-intensity curves from liver metastases and ovine corpora lutea. We evaluate the models' performance on the image data and compare their predictions for hemodynamic parameters such as the area under the curve, the mean transit time, the full-width at half-maximum, the time to peak intensity, and the wash-in time. The models that best fit the experimental data are the lognormal function and the diffusion-with-drift model.
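The abstract does not describe the fitting procedure itself. A minimal sketch of how such a model fit could be carried out, assuming a standard lognormal parameterization of the dilution curve and a nonlinear least-squares fit with SciPy's `curve_fit`, is shown below; the function names, initial guesses, and synthetic data are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: fit a lognormal indicator-dilution model to a
# contrast-enhanced ultrasound time-intensity curve and derive
# hemodynamic parameters. Not the paper's code; names are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def lognormal_model(t, auc, mu, sigma, t0):
    """Lognormal dilution curve, zero before the arrival time t0."""
    tau = np.clip(t - t0, 1e-9, None)  # avoid log(0) and negative delays
    c = auc / (tau * sigma * np.sqrt(2 * np.pi)) * \
        np.exp(-(np.log(tau) - mu) ** 2 / (2 * sigma ** 2))
    return np.where(t > t0, c, 0.0)

def fit_tic(t, intensity):
    """Fit the model and return example hemodynamic-related parameters."""
    # Crude initial guesses: AUC from the trapezoidal rule, mu from the peak time.
    p0 = [np.trapz(intensity, t), np.log(t[np.argmax(intensity)] + 1e-9), 0.5, 0.0]
    popt, _ = curve_fit(lognormal_model, t, intensity, p0=p0, maxfev=10000)
    auc, mu, sigma, t0 = popt
    mtt = t0 + np.exp(mu + sigma ** 2 / 2)   # mean transit time of the lognormal
    ttp = t0 + np.exp(mu - sigma ** 2)       # time to peak (mode of the lognormal)
    return {"AUC": auc, "MTT": mtt, "TTP": ttp, "params": popt}

if __name__ == "__main__":
    # Synthetic curve standing in for a measured time-intensity curve.
    t = np.linspace(0, 60, 300)
    truth = lognormal_model(t, 100.0, 2.5, 0.4, 2.0)
    noisy = truth + np.random.default_rng(0).normal(0, 0.05, t.size)
    print(fit_tic(t, noisy))
```

The same fitting routine could be repeated with the gamma variate, diffusion-with-drift, or lagged normal expressions by swapping the model function, which is how the models compared in the paper could in principle be benchmarked against one another.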
DOI: http://dx.doi.org/10.1109/TUFFC.2010.1550