How do humans acquire a meaningful understanding of the world with little to no supervision or semantic labels provided by the environment? Here we investigate embodiment, with a closed loop between action and perception, as one key component in this process. We take a close look at the representations learned by a deep reinforcement learning agent trained on high-dimensional visual observations collected in a 3D environment with very sparse rewards. We show that this agent learns stable representations of meaningful concepts such as doors without receiving any semantic labels. Our results show that the agent learns to represent the action-relevant information, extracted from a simulated camera stream, in a wide variety of sparse activation patterns. The quality of the learned representations demonstrates the strength of embodied learning and its advantages over fully supervised approaches.
DOI: http://dx.doi.org/10.1016/j.neunet.2020.11.004
Sensors (Basel)
January 2025
School of Information and Communication Engineering, Beijing Information Science and Technology University, Beijing 100101, China.
Human activity recognition by radar sensors plays an important role in healthcare and smart homes. However, labeling large radar datasets is difficult and time-consuming, and models trained on insufficient labeled data struggle to produce accurate classification results. In this paper, we propose a multiscale residual weighted classification network that combines large-scale, medium-scale, and small-scale residual networks.
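The multiscale idea described above can be illustrated with a toy sketch: several residual branches smooth the same signal at different scales, and their outputs are fused with fixed weights. This is a minimal illustration of the general pattern, not the paper's architecture; the kernel sizes, weights, and moving-average filters are assumptions.

```python
import numpy as np

def branch(x, kernel_size):
    """One 'residual' branch: smooth the signal at a given scale, then add the input back."""
    kernel = np.ones(kernel_size) / kernel_size        # simple moving-average filter
    smoothed = np.convolve(x, kernel, mode="same")
    return x + smoothed                                # residual connection

def multiscale_weighted(x, scales=(3, 7, 15), weights=(0.5, 0.3, 0.2)):
    """Fuse small-, medium-, and large-scale branches with fixed fusion weights."""
    outputs = [w * branch(x, k) for k, w in zip(scales, weights)]
    return np.sum(outputs, axis=0)

signal = np.sin(np.linspace(0, 4 * np.pi, 64))         # stand-in for a radar micro-Doppler slice
features = multiscale_weighted(signal)
print(features.shape)  # (64,)
```

In a trained network the fusion weights would be learned rather than fixed; the sketch only shows how three scales of the same residual pattern can be combined into one feature map.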
Sensors (Basel)
December 2024
Master's Program in Information and Computer Science, Doshisha University, Kyoto 610-0394, Japan.
The semantic segmentation of bone structures demands pixel-level classification accuracy to create reliable bone models for diagnosis. While Convolutional Neural Networks (CNNs) are commonly used for segmentation, they often struggle with complex shapes due to their focus on texture features and limited ability to incorporate positional information. As orthopedic surgery increasingly requires precise automatic diagnosis, we explored SegFormer, an enhanced Vision Transformer model that better handles spatial awareness in segmentation tasks.
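The pixel-level classification accuracy this abstract demands can be made concrete with a small sketch: per-pixel class prediction by argmax over class scores, evaluated with intersection-over-union (IoU), a standard segmentation metric. This is a generic illustration of pixel-level segmentation evaluation, not SegFormer itself; the toy logits and class labels are assumptions.

```python
import numpy as np

def predict_mask(logits):
    """Per-pixel classification: pick the highest-scoring class at each pixel."""
    return np.argmax(logits, axis=-1)

def iou(pred, target, cls):
    """Intersection-over-union for one class."""
    p, t = (pred == cls), (target == cls)
    union = np.logical_or(p, t).sum()
    return np.logical_and(p, t).sum() / union if union else 1.0

# Toy 4x4 image with 2 classes: one score per class at each pixel.
logits = np.zeros((4, 4, 2))
logits[:2, :, 1] = 1.0             # top half scores higher for class 1 (e.g. "bone")
pred = predict_mask(logits)
target = np.zeros((4, 4), dtype=int)
target[:2, :] = 1                  # ground truth matches the prediction exactly
print(iou(pred, target, cls=1))    # 1.0
```

Any segmentation backbone, CNN or Transformer, produces such per-pixel scores; the model choice affects how accurately the predicted mask matches the target, which IoU quantifies.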
Int J Neural Syst
January 2025
Yangtze Delta Region Institute (Huzhou), University of Electronic Science and Technology of China, Huzhou 313001, P. R. China.
Visual semantic decoding aims to extract perceived semantic information from the visual responses of the human brain and convert it into interpretable semantic labels. Although significant progress has been made in semantic decoding across individual visual cortices, studies on the semantic decoding of the ventral and dorsal cortical visual pathways remain limited. This study proposed a graph neural network (GNN)-based semantic decoding model on a natural scene dataset (NSD) to investigate the decoding differences between the dorsal and ventral pathways when processing various parts of speech, including verbs, nouns, and adjectives.
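The core operation of a GNN such as the one this abstract describes is message passing: each node (here, a brain region could be a node) aggregates features from its neighbors and projects them. The sketch below shows one generic graph-convolution step; the toy graph, node features, and weights are all assumptions, not the paper's model.

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One graph-convolution step: average neighbor features, then project.

    adj: (n, n) adjacency with self-loops; features: (n, d); weight: (d, k)."""
    deg = adj.sum(axis=1, keepdims=True)
    propagated = (adj @ features) / deg                # mean over each node's neighborhood
    return np.maximum(propagated @ weight, 0.0)        # linear projection + ReLU

# Toy graph: 3 nodes in a line (hypothetical stand-ins for cortical regions).
adj = np.array([[1, 1, 0],
                [1, 1, 1],
                [0, 1, 1]], dtype=float)
features = np.eye(3)                                   # one-hot node features
weight = np.ones((3, 2))                               # (d=3, k=2) projection
out = gcn_layer(adj, features, weight)
print(out.shape)  # (3, 2)
```

Stacking such layers lets information flow along the graph, which is what allows a decoder to relate activity in connected pathway regions to semantic labels.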
Biomed Eng Lett
January 2025
Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea.
A weight-bearing lateral radiograph (WBLR) of the foot is a gold standard for diagnosing adult-acquired flatfoot deformity. However, it is difficult to measure the major axis of bones in WBLR without using auxiliary lines. Herein, we develop semantic segmentation with a deep learning model (DLm) on the WBLR of the foot for enhanced diagnosis of pes planus and pes cavus.
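Once a bone is segmented, its major axis can be estimated directly from the binary mask, with no auxiliary lines, for example via PCA of the pixel coordinates. This is one plausible sketch of such a measurement, not the paper's DLm pipeline; the toy mask is an assumption.

```python
import numpy as np

def major_axis(mask):
    """Principal axis of a binary segmentation mask via PCA of pixel coordinates.

    Returns a unit vector along the longest direction of the segmented bone."""
    ys, xs = np.nonzero(mask)
    coords = np.stack([xs, ys], axis=1).astype(float)
    coords -= coords.mean(axis=0)                      # center the point cloud
    cov = np.cov(coords, rowvar=False)                 # 2x2 covariance of pixel positions
    eigvals, eigvecs = np.linalg.eigh(cov)
    return eigvecs[:, np.argmax(eigvals)]              # eigenvector of the largest eigenvalue

# Toy mask: a horizontal 1x10 "bone" -> the major axis should point along x.
mask = np.zeros((5, 12), dtype=bool)
mask[2, 1:11] = True
axis = major_axis(mask)
print(np.round(np.abs(axis), 3))  # [1. 0.]
```

The angle between two such axes (e.g. talus and first metatarsal) is the kind of quantity used to grade pes planus and pes cavus, which is why a reliable per-bone mask matters.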
J Biomed Semantics
January 2025
Database Center for Life Science, Joint Support-Center for Data Science Research, Research Organization of Information and Systems, Kashiwa, Chiba, Japan.
Background: TogoID ( https://togoid.dbcls.jp/ ) is an identifier (ID) conversion service designed to link IDs across diverse categories of life science databases.