Autonomous vehicles (AVs) are expected to improve, reshape, and revolutionize the future of ground transportation. It is anticipated that ordinary vehicles will one day be replaced with smart vehicles able to make decisions and perform driving tasks on their own. To achieve this objective, self-driving vehicles are equipped with sensors that perceive their immediate surroundings, while advances in communication technologies, such as 5G, extend perception to the more distant environment. In the meantime, local perception, as with human beings, will continue to be an effective means of controlling the vehicle at short range. Extended perception, on the other hand, allows for the anticipation of distant events and produces smarter behavior to guide the vehicle to its destination while respecting a set of criteria (safety, energy management, traffic optimization, comfort). Despite the remarkable advances in sensor technologies in recent years, in terms of both effectiveness and applicability to AV systems, sensors can still fail because of noise, ambient conditions, or manufacturing defects, among other factors; it is therefore not advisable to rely on a single sensor for any autonomous driving task. The practical solution is to incorporate multiple competitive and complementary sensors that work synergistically to overcome their individual shortcomings. This article provides a comprehensive review of the state-of-the-art methods used to improve the performance of AV systems in short-range or local vehicle environments. Specifically, it focuses on recent studies that use deep learning sensor fusion algorithms for perception, localization, and mapping. The article concludes by highlighting some of the current trends and possible future research directions.
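To make the fusion idea concrete, below is a minimal sketch of deep-learning late fusion for two complementary sensors, a camera and a LiDAR: each modality is encoded separately, and the features are concatenated before a shared prediction head. The architecture, layer sizes, and names are illustrative assumptions for exposition only, not a specific method surveyed in the article.

```python
# Illustrative late-fusion sketch (camera + LiDAR); all module sizes and
# the fusion strategy are assumptions, not the article's specific method.
import torch
import torch.nn as nn

class LateFusionNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Camera branch: encodes an RGB image into a feature vector.
        self.camera_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),  # -> (batch, 32)
        )
        # LiDAR branch: encodes a bird's-eye-view (BEV) projection.
        self.lidar_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),  # -> (batch, 16)
        )
        # Fusion head: operates on the concatenated per-sensor features,
        # so a weak or noisy modality can be compensated by the other.
        self.head = nn.Linear(32 + 16, num_classes)

    def forward(self, image: torch.Tensor, lidar_bev: torch.Tensor) -> torch.Tensor:
        fused = torch.cat(
            [self.camera_encoder(image), self.lidar_encoder(lidar_bev)], dim=1
        )
        return self.head(fused)

# Usage: one fused prediction from both modalities.
model = LateFusionNet()
logits = model(
    torch.randn(2, 3, 128, 128),  # batch of camera images
    torch.randn(2, 1, 128, 128),  # batch of LiDAR BEV projections
)
print(logits.shape)  # torch.Size([2, 10])
```

In practice, fusion can also occur earlier (at the raw-data or feature level) or later (at the decision level); the feature-level variant shown here is simply the easiest to sketch in a few lines.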
Full text: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7436174 (PMC)
DOI: http://dx.doi.org/10.3390/s20154220