Introduction: Distress is common among cancer patients, especially those undergoing surgery. However, no study has systematically analyzed distress trends in this population. The purpose of this study was to systematically review perioperative rates of distress, as well as differences across cancer types, in cancer patients undergoing surgical intervention.
This study compared the image quality of conventional multiplexed sensitivity-encoding diffusion-weighted imaging (MUSE-DWI) with that of deep learning MUSE-DWI, in which a vendor-specific deep learning (DL) reconstruction was applied, in bladder MRI. This retrospective study included 57 patients with a visible bladder mass. DWI images were reconstructed using a vendor-provided DL algorithm (AIR Recon DL; GE Healthcare), a CNN-based method that reduces noise and enhances image quality, applied here as a prototype for MUSE-DWI.
The purpose of this Medical Physics Practice Guideline (MPPG) is to describe the minimum level of medical physics support deemed prudent for the practice of linear-accelerator, photon-based (linac) stereotactic radiosurgery (SRS) and stereotactic body radiation therapy (SBRT) services. This report is an update of MPPG 9.a, published in 2017.
The scarecrow (scro) gene encodes a fly homolog of mammalian Nkx2.1 and is vital for early fly development as well as for optic lobe development. Previously, scro was reported to produce a circular RNA (circRNA) in addition to conventional mRNAs.
The application of large language models in materials science has opened new avenues for accelerating materials development. Building on this advance, we propose a framework that leverages large language models to optimize experimental procedures for synthesizing quantum dot materials with multiple desired properties. The framework integrates a synthesis protocol generation model and a property prediction model, both fine-tuned from open-source large language models on in-house synthesis protocol data using parameter-efficient training techniques.
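To make the parameter-efficient fine-tuning step concrete, the following is a minimal sketch (not the authors' code) that attaches LoRA adapters to an open-source causal language model and trains them on protocol text using the Hugging Face transformers and peft libraries. The base model name, LoRA hyperparameters, and the protocols.jsonl data file are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch: LoRA fine-tuning of an open-source LLM on synthesis-protocol text.
# Assumptions: base model, hyperparameters, and "protocols.jsonl" are hypothetical.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"   # assumed open-source base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Parameter-efficient training: only small low-rank adapter matrices are updated.
lora_cfg = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"],
                      task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)

# Hypothetical in-house dataset: one JSON record per protocol, with a "text" field.
data = load_dataset("json", data_files="protocols.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="qd-protocol-lora",
                           per_device_train_batch_size=2,
                           num_train_epochs=3,
                           learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In this setup only the adapter weights are trained while the base model stays frozen, which is what keeps the fine-tuning affordable for a modest in-house protocol dataset; the same pattern would apply to both the protocol generation and property prediction models.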