Widespread adoption of autonomous robotic systems relies greatly on safe and reliable operation, which in many cases derives from the ability to maintain accurate and robust perception. Environmental and operational conditions, as well as improper maintenance, can produce calibration errors that inhibit sensor fusion and, consequently, degrade perception performance and overall system usability. Traditionally, sensor calibration is performed in a controlled environment with one or more known targets. Such a procedure can only be carried out between operations and is done manually, a tedious task if it must be conducted on a regular basis. This creates an acute need for online targetless methods capable of yielding a set of geometric transformations based on perceived environmental features. However, the often-required redundancy in sensing modalities poses further challenges, as the features captured by each sensor and their distinctiveness may vary. We present a holistic approach to performing joint calibration of a camera-lidar-radar trio in a representative autonomous driving application. Leveraging prior knowledge and physical properties of these sensing modalities together with semantic information, we propose two targetless calibration methods within a cost minimization framework: the first via direct online optimization, and the second through self-supervised learning (SSL).
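The cost-minimization framing can be illustrated with a toy example: recover a rigid transform between two sets of corresponding 2D features by minimizing a sum-of-squared-distances cost. This is only a sketch of the general idea, not the paper's method; the function names and the finite-difference gradient-descent optimizer are illustrative choices, and a real camera-lidar-radar calibration operates on SE(3) with cross-modal feature residuals.

```python
import numpy as np

def transform(points, theta, t):
    # Apply a 2D rigid transform: rotation by theta, then translation by t.
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return points @ R.T + t

def calibration_cost(params, src, dst):
    # Sum of squared distances between transformed source features
    # and their counterparts observed by the other sensor.
    theta, tx, ty = params
    return np.sum((transform(src, theta, np.array([tx, ty])) - dst) ** 2)

def calibrate(src, dst, steps=2000, lr=1e-3, eps=1e-6):
    # Minimize the cost by plain gradient descent with
    # central finite-difference gradients (illustrative only).
    params = np.zeros(3)
    for _ in range(steps):
        grad = np.zeros(3)
        for i in range(3):
            p_hi, p_lo = params.copy(), params.copy()
            p_hi[i] += eps
            p_lo[i] -= eps
            grad[i] = (calibration_cost(p_hi, src, dst)
                       - calibration_cost(p_lo, src, dst)) / (2 * eps)
        params -= lr * grad
    return params
```

Given matched features, `calibrate` recovers the rotation angle and translation that align the two sensor frames; the paper's two methods replace this toy optimizer with direct online optimization and with a self-supervised network, respectively.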
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11315887 | PMC
http://dx.doi.org/10.1038/s41598-024-53009-z | DOI Listing
JMIR Med Inform
January 2025
Department of Internal Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea.
Background: The two most commonly used methods to identify frailty are the frailty phenotype and the frailty index. However, both methods have limitations in clinical application. In addition, methods for measuring frailty have not yet been standardized.
J Nucl Med
January 2025
Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts.
Large language models (LLMs) are poised to have a disruptive impact on health care. Numerous studies have demonstrated promising applications of LLMs in medical imaging, and this number will grow as LLMs further evolve into large multimodal models (LMMs) capable of processing both text and images. Given the substantial roles that LLMs and LMMs will have in health care, it is important for physicians to understand the underlying principles of these technologies so they can use them more effectively and responsibly and help guide their development.
Int J Comput Assist Radiol Surg
January 2025
Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg, Martensstr. 3, 91058, Erlangen, Bayern, Germany.
Purpose: Breast cancer remains one of the most prevalent cancers globally, necessitating effective early screening and diagnosis. This study investigates the effectiveness and generalizability of our recently proposed data augmentation technique, attention-guided erasing (AGE), across various transfer learning classification tasks for breast abnormality classification in mammography.
Methods: AGE utilizes attention head visualizations from DINO self-supervised pretraining to weakly localize regions of interest (ROI) in images.
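A minimal sketch of the erasing step, assuming a precomputed per-pixel attention map (the actual AGE pipeline derives these maps from DINO attention-head visualizations): pixels whose attention falls below a chosen quantile are zeroed out, leaving the weakly localized ROI intact. The function name, the quantile-based threshold, and the default parameter are assumptions for illustration, not the published implementation.

```python
import numpy as np

def attention_guided_erase(image, attention, keep_quantile=0.9):
    # Zero out pixels whose attention score falls below the given
    # quantile, keeping only the highly attended region of interest.
    threshold = np.quantile(attention, keep_quantile)
    mask = attention >= threshold
    return image * mask, mask
```

Applied as a data augmentation during transfer learning, such a step suppresses background that the self-supervised attention considers irrelevant, so the classifier trains on the weakly localized abnormality region.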
Radiol Artif Intell
January 2025
From the Department of Radiation Oncology (A.S.G., V.H., H.S.) and Department of Radiology and Imaging Sciences (B.D.W.), Emory University School of Medicine, 1701 Uppergate Dr, C5008 Winship Cancer Institute, Atlanta, GA 30322; Department of Radiology, University of Miami {School of Medicine?}, Miami, Fla (S.S., A.A.M.); Department of {Radiology?}, Northwestern University {Feinberg School of Medicine?}, Chicago, Ill (L.A.D.C.); Department of Biostatistics and Bioinformatics, Emory University Rollins School of Public Health, Atlanta, Ga (Y.L.); Department of Psychology, Emory University, Atlanta, Ga (M.T.); and Department of Radiology, Duke University Medical Center, Durham, NC (B.J.S.).
Purpose To develop and evaluate the performance of NNFit, a self-supervised deep learning method for quantification of high-resolution short echo-time (TE) echo-planar spectroscopic imaging (EPSI) datasets, with the goal of addressing the computational bottleneck of conventional spectral quantification methods in the clinical workflow. Materials and Methods This retrospective study included 89 short-TE whole-brain EPSI/GRAPPA scans from clinical trials for glioblastoma (Trial 1, May 2014-October 2018) and major depressive disorder (Trial 2, 2022-2023). The training dataset included 685k spectra from 20 participants (60 scans) in Trial 1.
Aquat Toxicol
January 2025
School of Computer Science and Software Engineering, University of Science and Technology Liaoning, Anshan, 114051, China; Wenzhou Institute, University of Chinese Academy of Sciences, Wenzhou, 325001, China. Electronic address:
As chemical concentrations in aquatic environments rise and the habitats of aquatic organisms degrade, studying the impact of these compounds on diverse aquatic populations becomes increasingly important. Understanding the potential effects of different chemical substances on different species is a necessary requirement for protecting the environment and ensuring sustainable human development. In this regard, deep learning methods offer significant advantages over traditional experimental approaches in terms of cost, accuracy, and generalization ability.